00:00:00.001 Started by upstream project "autotest-per-patch" build number 126115 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.082 The recommended git tool is: git 00:00:00.082 using credential 00000000-0000-0000-0000-000000000002 00:00:00.083 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.117 Fetching changes from the remote Git repository 00:00:00.120 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.180 > git --version # 'git version 2.39.2' 00:00:00.180 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.699 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.709 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.720 Checking out Revision 308e970df89ed396a3f9dcf22fba8891259694e4 (FETCH_HEAD) 00:00:06.720 > git config core.sparsecheckout # timeout=10 00:00:06.733 > git read-tree -mu HEAD # timeout=10 00:00:06.749 > git checkout -f 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=5 00:00:06.767 Commit message: "jjb/create-perf-report: make job run concurrent" 00:00:06.767 > git rev-list --no-walk 308e970df89ed396a3f9dcf22fba8891259694e4 # timeout=10 00:00:06.852 [Pipeline] Start of Pipeline 00:00:06.863 [Pipeline] library 00:00:06.865 Loading library shm_lib@master 00:00:06.865 Library shm_lib@master is cached. Copying from home. 00:00:06.878 [Pipeline] node 00:00:06.885 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.887 [Pipeline] { 00:00:06.896 [Pipeline] catchError 00:00:06.898 [Pipeline] { 00:00:06.909 [Pipeline] wrap 00:00:06.917 [Pipeline] { 00:00:06.922 [Pipeline] stage 00:00:06.924 [Pipeline] { (Prologue) 00:00:06.937 [Pipeline] echo 00:00:06.938 Node: VM-host-SM17 00:00:06.942 [Pipeline] cleanWs 00:00:06.949 [WS-CLEANUP] Deleting project workspace... 00:00:06.949 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.955 [WS-CLEANUP] done 00:00:07.156 [Pipeline] setCustomBuildProperty 00:00:07.235 [Pipeline] httpRequest 00:00:07.261 [Pipeline] echo 00:00:07.263 Sorcerer 10.211.164.101 is alive 00:00:07.270 [Pipeline] httpRequest 00:00:07.274 HttpMethod: GET 00:00:07.274 URL: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:07.275 Sending request to url: http://10.211.164.101/packages/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:07.283 Response Code: HTTP/1.1 200 OK 00:00:07.284 Success: Status code 200 is in the accepted range: 200,404 00:00:07.284 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:09.442 [Pipeline] sh 00:00:09.722 + tar --no-same-owner -xf jbp_308e970df89ed396a3f9dcf22fba8891259694e4.tar.gz 00:00:09.751 [Pipeline] httpRequest 00:00:09.774 [Pipeline] echo 00:00:09.775 Sorcerer 10.211.164.101 is alive 00:00:09.781 [Pipeline] httpRequest 00:00:09.784 HttpMethod: GET 00:00:09.784 URL: http://10.211.164.101/packages/spdk_aebb775b19b829d8c3ab02f61fe44c98e7c6c075.tar.gz 00:00:09.784 Sending request to url: http://10.211.164.101/packages/spdk_aebb775b19b829d8c3ab02f61fe44c98e7c6c075.tar.gz 00:00:09.785 Response Code: HTTP/1.1 200 OK 00:00:09.786 Success: Status code 200 is in the accepted range: 200,404 00:00:09.786 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_aebb775b19b829d8c3ab02f61fe44c98e7c6c075.tar.gz 00:00:27.064 [Pipeline] sh 00:00:27.346 + tar --no-same-owner -xf spdk_aebb775b19b829d8c3ab02f61fe44c98e7c6c075.tar.gz 00:00:29.884 [Pipeline] sh 00:00:30.229 + git -C spdk log --oneline -n5 00:00:30.229 aebb775b1 bdev/nvme: Remove assert() to detect ANA change for inactive namespace 00:00:30.229 719d03c6a sock/uring: only register net impl if supported 00:00:30.229 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:30.229 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:30.229 6c7c1f57e accel: add sequence outstanding stat 00:00:30.251 [Pipeline] writeFile 00:00:30.266 [Pipeline] sh 00:00:30.541 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:30.555 [Pipeline] sh 00:00:30.836 + cat autorun-spdk.conf 00:00:30.836 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.836 SPDK_TEST_NVMF=1 00:00:30.836 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:30.836 SPDK_TEST_URING=1 00:00:30.836 SPDK_TEST_USDT=1 00:00:30.836 SPDK_RUN_UBSAN=1 00:00:30.836 NET_TYPE=virt 00:00:30.836 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.842 RUN_NIGHTLY=0 00:00:30.846 [Pipeline] } 00:00:30.866 [Pipeline] // stage 00:00:30.884 [Pipeline] stage 00:00:30.887 [Pipeline] { (Run VM) 00:00:30.906 [Pipeline] sh 00:00:31.185 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:31.185 + echo 'Start stage prepare_nvme.sh' 00:00:31.185 Start stage prepare_nvme.sh 00:00:31.185 + [[ -n 4 ]] 00:00:31.185 + disk_prefix=ex4 00:00:31.185 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:31.185 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:31.185 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:31.185 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.185 ++ SPDK_TEST_NVMF=1 00:00:31.185 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.185 ++ SPDK_TEST_URING=1 00:00:31.185 ++ SPDK_TEST_USDT=1 00:00:31.185 ++ SPDK_RUN_UBSAN=1 00:00:31.185 ++ NET_TYPE=virt 00:00:31.185 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.185 ++ RUN_NIGHTLY=0 
00:00:31.185 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:31.185 + nvme_files=() 00:00:31.185 + declare -A nvme_files 00:00:31.185 + backend_dir=/var/lib/libvirt/images/backends 00:00:31.185 + nvme_files['nvme.img']=5G 00:00:31.185 + nvme_files['nvme-cmb.img']=5G 00:00:31.185 + nvme_files['nvme-multi0.img']=4G 00:00:31.185 + nvme_files['nvme-multi1.img']=4G 00:00:31.185 + nvme_files['nvme-multi2.img']=4G 00:00:31.186 + nvme_files['nvme-openstack.img']=8G 00:00:31.186 + nvme_files['nvme-zns.img']=5G 00:00:31.186 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:31.186 + (( SPDK_TEST_FTL == 1 )) 00:00:31.186 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:31.186 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:31.186 + for nvme in "${!nvme_files[@]}" 00:00:31.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:31.186 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.186 + for nvme in "${!nvme_files[@]}" 00:00:31.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:31.186 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.186 + for nvme in "${!nvme_files[@]}" 00:00:31.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:31.186 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.186 + for nvme in "${!nvme_files[@]}" 00:00:31.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:31.186 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.186 + for nvme in "${!nvme_files[@]}" 00:00:31.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:31.186 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.186 + for nvme in "${!nvme_files[@]}" 00:00:31.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:31.186 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.186 + for nvme in "${!nvme_files[@]}" 00:00:31.186 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:31.752 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.752 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:31.752 + echo 'End stage prepare_nvme.sh' 00:00:31.752 End stage prepare_nvme.sh 00:00:32.021 [Pipeline] sh 00:00:32.299 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:32.299 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:00:32.299 00:00:32.299 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 
00:00:32.299 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:32.299 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:32.299 HELP=0 00:00:32.299 DRY_RUN=0 00:00:32.299 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:00:32.299 NVME_DISKS_TYPE=nvme,nvme, 00:00:32.299 NVME_AUTO_CREATE=0 00:00:32.299 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:00:32.299 NVME_CMB=,, 00:00:32.299 NVME_PMR=,, 00:00:32.299 NVME_ZNS=,, 00:00:32.299 NVME_MS=,, 00:00:32.299 NVME_FDP=,, 00:00:32.299 SPDK_VAGRANT_DISTRO=fedora38 00:00:32.299 SPDK_VAGRANT_VMCPU=10 00:00:32.299 SPDK_VAGRANT_VMRAM=12288 00:00:32.299 SPDK_VAGRANT_PROVIDER=libvirt 00:00:32.299 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:32.299 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:32.299 SPDK_OPENSTACK_NETWORK=0 00:00:32.299 VAGRANT_PACKAGE_BOX=0 00:00:32.299 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:32.299 FORCE_DISTRO=true 00:00:32.299 VAGRANT_BOX_VERSION= 00:00:32.299 EXTRA_VAGRANTFILES= 00:00:32.299 NIC_MODEL=e1000 00:00:32.299 00:00:32.299 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:32.299 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:34.830 Bringing machine 'default' up with 'libvirt' provider... 00:00:35.770 ==> default: Creating image (snapshot of base box volume). 00:00:35.770 ==> default: Creating domain with the following settings... 00:00:35.770 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720783478_5e71df3bcd1cd999d2ea 00:00:35.770 ==> default: -- Domain type: kvm 00:00:35.770 ==> default: -- Cpus: 10 00:00:35.770 ==> default: -- Feature: acpi 00:00:35.770 ==> default: -- Feature: apic 00:00:35.770 ==> default: -- Feature: pae 00:00:35.770 ==> default: -- Memory: 12288M 00:00:35.770 ==> default: -- Memory Backing: hugepages: 00:00:35.770 ==> default: -- Management MAC: 00:00:35.770 ==> default: -- Loader: 00:00:35.770 ==> default: -- Nvram: 00:00:35.770 ==> default: -- Base box: spdk/fedora38 00:00:35.770 ==> default: -- Storage pool: default 00:00:35.770 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720783478_5e71df3bcd1cd999d2ea.img (20G) 00:00:35.770 ==> default: -- Volume Cache: default 00:00:35.770 ==> default: -- Kernel: 00:00:35.770 ==> default: -- Initrd: 00:00:35.770 ==> default: -- Graphics Type: vnc 00:00:35.770 ==> default: -- Graphics Port: -1 00:00:35.770 ==> default: -- Graphics IP: 127.0.0.1 00:00:35.770 ==> default: -- Graphics Password: Not defined 00:00:35.770 ==> default: -- Video Type: cirrus 00:00:35.770 ==> default: -- Video VRAM: 9216 00:00:35.770 ==> default: -- Sound Type: 00:00:35.770 ==> default: -- Keymap: en-us 00:00:35.770 ==> default: -- TPM Path: 00:00:35.770 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:35.770 ==> default: -- Command line args: 00:00:35.770 ==> default: -> value=-device, 00:00:35.770 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:35.770 ==> default: -> value=-drive, 00:00:35.770 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:35.770 ==> default: -> value=-device, 
00:00:35.770 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.770 ==> default: -> value=-device, 00:00:35.770 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:35.770 ==> default: -> value=-drive, 00:00:35.770 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:35.770 ==> default: -> value=-device, 00:00:35.770 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.770 ==> default: -> value=-drive, 00:00:35.771 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:35.771 ==> default: -> value=-device, 00:00:35.771 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.771 ==> default: -> value=-drive, 00:00:35.771 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:35.771 ==> default: -> value=-device, 00:00:35.771 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.029 ==> default: Creating shared folders metadata... 00:00:36.029 ==> default: Starting domain. 00:00:37.407 ==> default: Waiting for domain to get an IP address... 00:00:55.508 ==> default: Waiting for SSH to become available... 00:00:55.508 ==> default: Configuring and enabling network interfaces... 00:00:58.036 default: SSH address: 192.168.121.15:22 00:00:58.036 default: SSH username: vagrant 00:00:58.036 default: SSH auth method: private key 00:00:59.937 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:08.053 ==> default: Mounting SSHFS shared folder... 00:01:08.989 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:08.989 ==> default: Checking Mount.. 00:01:10.366 ==> default: Folder Successfully Mounted! 00:01:10.367 ==> default: Running provisioner: file... 00:01:11.303 default: ~/.gitconfig => .gitconfig 00:01:11.562 00:01:11.562 SUCCESS! 00:01:11.562 00:01:11.562 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:11.562 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.562 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:01:11.562 00:01:11.571 [Pipeline] } 00:01:11.589 [Pipeline] // stage 00:01:11.597 [Pipeline] dir 00:01:11.598 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:11.600 [Pipeline] { 00:01:11.615 [Pipeline] catchError 00:01:11.616 [Pipeline] { 00:01:11.628 [Pipeline] sh 00:01:11.910 + vagrant ssh-config --host vagrant 00:01:11.910 + sed -ne /^Host/,$p 00:01:11.910 + tee ssh_conf 00:01:15.201 Host vagrant 00:01:15.201 HostName 192.168.121.15 00:01:15.201 User vagrant 00:01:15.201 Port 22 00:01:15.201 UserKnownHostsFile /dev/null 00:01:15.201 StrictHostKeyChecking no 00:01:15.201 PasswordAuthentication no 00:01:15.201 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:15.201 IdentitiesOnly yes 00:01:15.201 LogLevel FATAL 00:01:15.201 ForwardAgent yes 00:01:15.201 ForwardX11 yes 00:01:15.201 00:01:15.214 [Pipeline] withEnv 00:01:15.216 [Pipeline] { 00:01:15.233 [Pipeline] sh 00:01:15.514 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:15.514 source /etc/os-release 00:01:15.514 [[ -e /image.version ]] && img=$(< /image.version) 00:01:15.514 # Minimal, systemd-like check. 00:01:15.514 if [[ -e /.dockerenv ]]; then 00:01:15.514 # Clear garbage from the node's name: 00:01:15.514 # agt-er_autotest_547-896 -> autotest_547-896 00:01:15.514 # $HOSTNAME is the actual container id 00:01:15.514 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:15.514 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:15.514 # We can assume this is a mount from a host where container is running, 00:01:15.514 # so fetch its hostname to easily identify the target swarm worker. 00:01:15.514 container="$(< /etc/hostname) ($agent)" 00:01:15.514 else 00:01:15.514 # Fallback 00:01:15.514 container=$agent 00:01:15.514 fi 00:01:15.514 fi 00:01:15.514 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:15.514 00:01:15.783 [Pipeline] } 00:01:15.807 [Pipeline] // withEnv 00:01:15.816 [Pipeline] setCustomBuildProperty 00:01:15.833 [Pipeline] stage 00:01:15.836 [Pipeline] { (Tests) 00:01:15.858 [Pipeline] sh 00:01:16.136 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:16.411 [Pipeline] sh 00:01:16.696 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:16.973 [Pipeline] timeout 00:01:16.973 Timeout set to expire in 30 min 00:01:16.976 [Pipeline] { 00:01:16.993 [Pipeline] sh 00:01:17.368 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:17.950 HEAD is now at aebb775b1 bdev/nvme: Remove assert() to detect ANA change for inactive namespace 00:01:17.964 [Pipeline] sh 00:01:18.243 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:18.513 [Pipeline] sh 00:01:18.791 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:18.807 [Pipeline] sh 00:01:19.086 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:19.086 ++ readlink -f spdk_repo 00:01:19.086 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:19.086 + [[ -n /home/vagrant/spdk_repo ]] 00:01:19.086 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:19.086 + 
DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:19.086 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:19.086 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:19.086 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:19.086 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:19.086 + cd /home/vagrant/spdk_repo 00:01:19.086 + source /etc/os-release 00:01:19.086 ++ NAME='Fedora Linux' 00:01:19.086 ++ VERSION='38 (Cloud Edition)' 00:01:19.086 ++ ID=fedora 00:01:19.086 ++ VERSION_ID=38 00:01:19.086 ++ VERSION_CODENAME= 00:01:19.086 ++ PLATFORM_ID=platform:f38 00:01:19.086 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:19.086 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.086 ++ LOGO=fedora-logo-icon 00:01:19.086 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:19.086 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.086 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:19.086 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.086 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.086 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.086 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:19.087 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.087 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:19.087 ++ SUPPORT_END=2024-05-14 00:01:19.087 ++ VARIANT='Cloud Edition' 00:01:19.087 ++ VARIANT_ID=cloud 00:01:19.087 + uname -a 00:01:19.087 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:19.087 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:19.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:19.654 Hugepages 00:01:19.654 node hugesize free / total 00:01:19.654 node0 1048576kB 0 / 0 00:01:19.654 node0 2048kB 0 / 0 00:01:19.654 00:01:19.654 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.654 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:19.654 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:19.654 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:19.654 + rm -f /tmp/spdk-ld-path 00:01:19.654 + source autorun-spdk.conf 00:01:19.654 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.654 ++ SPDK_TEST_NVMF=1 00:01:19.654 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.654 ++ SPDK_TEST_URING=1 00:01:19.654 ++ SPDK_TEST_USDT=1 00:01:19.654 ++ SPDK_RUN_UBSAN=1 00:01:19.654 ++ NET_TYPE=virt 00:01:19.654 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.654 ++ RUN_NIGHTLY=0 00:01:19.654 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.654 + [[ -n '' ]] 00:01:19.654 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:19.912 + for M in /var/spdk/build-*-manifest.txt 00:01:19.912 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.912 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.912 + for M in /var/spdk/build-*-manifest.txt 00:01:19.912 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.912 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.912 ++ uname 00:01:19.912 + [[ Linux == \L\i\n\u\x ]] 00:01:19.912 + sudo dmesg -T 00:01:19.912 + sudo dmesg --clear 00:01:19.912 + dmesg_pid=5110 00:01:19.912 + [[ Fedora Linux == FreeBSD ]] 00:01:19.912 + sudo dmesg -Tw 00:01:19.912 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.912 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.912 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.912 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.912 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.912 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.912 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.912 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:19.912 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.912 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.912 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.912 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.912 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.912 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.912 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:19.912 Test configuration: 00:01:19.912 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.912 SPDK_TEST_NVMF=1 00:01:19.912 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.912 SPDK_TEST_URING=1 00:01:19.912 SPDK_TEST_USDT=1 00:01:19.912 SPDK_RUN_UBSAN=1 00:01:19.912 NET_TYPE=virt 00:01:19.912 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.912 RUN_NIGHTLY=0 11:25:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:19.912 11:25:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.912 11:25:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.912 11:25:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.912 11:25:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.912 11:25:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.912 11:25:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.912 11:25:23 -- paths/export.sh@5 -- $ export PATH 00:01:19.912 11:25:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.912 11:25:23 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:19.912 11:25:23 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:19.913 11:25:23 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720783523.XXXXXX 
00:01:19.913 11:25:23 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720783523.suVT2m 00:01:19.913 11:25:23 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:19.913 11:25:23 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:19.913 11:25:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:19.913 11:25:23 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:19.913 11:25:23 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.913 11:25:23 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:19.913 11:25:23 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:19.913 11:25:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.913 11:25:23 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:19.913 11:25:23 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:19.913 11:25:23 -- pm/common@17 -- $ local monitor 00:01:19.913 11:25:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.913 11:25:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.913 11:25:23 -- pm/common@25 -- $ sleep 1 00:01:19.913 11:25:23 -- pm/common@21 -- $ date +%s 00:01:19.913 11:25:23 -- pm/common@21 -- $ date +%s 00:01:19.913 11:25:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720783523 00:01:19.913 11:25:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720783523 00:01:20.171 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720783523_collect-vmstat.pm.log 00:01:20.171 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720783523_collect-cpu-load.pm.log 00:01:21.105 11:25:24 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:21.105 11:25:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.105 11:25:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.105 11:25:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:21.105 11:25:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.105 Fri Jul 12 11:25:24 AM UTC 2024 00:01:21.105 11:25:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.105 v24.09-pre-203-gaebb775b1 00:01:21.105 11:25:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:21.105 11:25:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.105 11:25:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.105 11:25:24 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:21.105 11:25:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:21.105 11:25:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.105 ************************************ 00:01:21.105 START TEST ubsan 00:01:21.105 ************************************ 00:01:21.105 using ubsan 00:01:21.105 11:25:24 ubsan -- common/autotest_common.sh@1123 -- 
$ echo 'using ubsan' 00:01:21.105 00:01:21.105 real 0m0.000s 00:01:21.105 user 0m0.000s 00:01:21.105 sys 0m0.000s 00:01:21.105 11:25:24 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:21.105 ************************************ 00:01:21.105 END TEST ubsan 00:01:21.105 ************************************ 00:01:21.105 11:25:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.105 11:25:24 -- common/autotest_common.sh@1142 -- $ return 0 00:01:21.105 11:25:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.105 11:25:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.105 11:25:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.105 11:25:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.106 11:25:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.106 11:25:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.106 11:25:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.106 11:25:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.106 11:25:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:21.106 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:21.106 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:21.673 Using 'verbs' RDMA provider 00:01:37.587 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:49.783 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:49.783 Creating mk/config.mk...done. 00:01:49.783 Creating mk/cc.flags.mk...done. 00:01:49.783 Type 'make' to build. 00:01:49.783 11:25:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:49.783 11:25:51 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:49.783 11:25:51 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:49.783 11:25:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.783 ************************************ 00:01:49.783 START TEST make 00:01:49.783 ************************************ 00:01:49.783 11:25:51 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:49.783 make[1]: Nothing to be done for 'all'. 
00:01:59.860 The Meson build system 00:01:59.860 Version: 1.3.1 00:01:59.860 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:59.860 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:59.860 Build type: native build 00:01:59.860 Program cat found: YES (/usr/bin/cat) 00:01:59.860 Project name: DPDK 00:01:59.860 Project version: 24.03.0 00:01:59.860 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.860 C linker for the host machine: cc ld.bfd 2.39-16 00:01:59.860 Host machine cpu family: x86_64 00:01:59.860 Host machine cpu: x86_64 00:01:59.860 Message: ## Building in Developer Mode ## 00:01:59.860 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.860 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:59.860 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.860 Program python3 found: YES (/usr/bin/python3) 00:01:59.860 Program cat found: YES (/usr/bin/cat) 00:01:59.860 Compiler for C supports arguments -march=native: YES 00:01:59.860 Checking for size of "void *" : 8 00:01:59.860 Checking for size of "void *" : 8 (cached) 00:01:59.860 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:59.860 Library m found: YES 00:01:59.860 Library numa found: YES 00:01:59.860 Has header "numaif.h" : YES 00:01:59.860 Library fdt found: NO 00:01:59.860 Library execinfo found: NO 00:01:59.860 Has header "execinfo.h" : YES 00:01:59.860 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.860 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.860 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.860 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.860 Run-time dependency openssl found: YES 3.0.9 00:01:59.860 Run-time dependency libpcap found: YES 1.10.4 00:01:59.860 Has header "pcap.h" with dependency libpcap: YES 00:01:59.860 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.860 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.860 Compiler for C supports arguments -Wformat: YES 00:01:59.860 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.860 Compiler for C supports arguments -Wformat-security: NO 00:01:59.860 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.860 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.860 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.860 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.860 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.861 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.861 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.861 Compiler for C supports arguments -Wundef: YES 00:01:59.861 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.861 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.861 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.861 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.861 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.861 Program objdump found: YES (/usr/bin/objdump) 00:01:59.861 Compiler for C supports arguments -mavx512f: YES 00:01:59.861 Checking if "AVX512 checking" compiles: YES 00:01:59.861 Fetching value of define "__SSE4_2__" : 1 00:01:59.861 Fetching value of define 
"__AES__" : 1 00:01:59.861 Fetching value of define "__AVX__" : 1 00:01:59.861 Fetching value of define "__AVX2__" : 1 00:01:59.861 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.861 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.861 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.861 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.861 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.861 Fetching value of define "__PCLMUL__" : 1 00:01:59.861 Fetching value of define "__RDRND__" : 1 00:01:59.861 Fetching value of define "__RDSEED__" : 1 00:01:59.861 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.861 Fetching value of define "__znver1__" : (undefined) 00:01:59.861 Fetching value of define "__znver2__" : (undefined) 00:01:59.861 Fetching value of define "__znver3__" : (undefined) 00:01:59.861 Fetching value of define "__znver4__" : (undefined) 00:01:59.861 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.861 Message: lib/log: Defining dependency "log" 00:01:59.861 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.861 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.861 Checking for function "getentropy" : NO 00:01:59.861 Message: lib/eal: Defining dependency "eal" 00:01:59.861 Message: lib/ring: Defining dependency "ring" 00:01:59.861 Message: lib/rcu: Defining dependency "rcu" 00:01:59.861 Message: lib/mempool: Defining dependency "mempool" 00:01:59.861 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.861 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.861 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.861 Compiler for C supports arguments -mpclmul: YES 00:01:59.861 Compiler for C supports arguments -maes: YES 00:01:59.861 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.861 Compiler for C supports arguments -mavx512bw: YES 00:01:59.861 Compiler for C supports arguments -mavx512dq: YES 00:01:59.861 Compiler for C supports arguments -mavx512vl: YES 00:01:59.861 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.861 Compiler for C supports arguments -mavx2: YES 00:01:59.861 Compiler for C supports arguments -mavx: YES 00:01:59.861 Message: lib/net: Defining dependency "net" 00:01:59.861 Message: lib/meter: Defining dependency "meter" 00:01:59.861 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.861 Message: lib/pci: Defining dependency "pci" 00:01:59.861 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.861 Message: lib/hash: Defining dependency "hash" 00:01:59.861 Message: lib/timer: Defining dependency "timer" 00:01:59.861 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.861 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.861 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.861 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.861 Message: lib/power: Defining dependency "power" 00:01:59.861 Message: lib/reorder: Defining dependency "reorder" 00:01:59.861 Message: lib/security: Defining dependency "security" 00:01:59.861 Has header "linux/userfaultfd.h" : YES 00:01:59.861 Has header "linux/vduse.h" : YES 00:01:59.861 Message: lib/vhost: Defining dependency "vhost" 00:01:59.861 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.861 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.861 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.861 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.861 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:59.861 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.861 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.861 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.861 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.861 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.861 Program doxygen found: YES (/usr/bin/doxygen) 00:01:59.861 Configuring doxy-api-html.conf using configuration 00:01:59.861 Configuring doxy-api-man.conf using configuration 00:01:59.861 Program mandb found: YES (/usr/bin/mandb) 00:01:59.861 Program sphinx-build found: NO 00:01:59.861 Configuring rte_build_config.h using configuration 00:01:59.861 Message: 00:01:59.861 ================= 00:01:59.861 Applications Enabled 00:01:59.861 ================= 00:01:59.861 00:01:59.861 apps: 00:01:59.861 00:01:59.861 00:01:59.861 Message: 00:01:59.861 ================= 00:01:59.861 Libraries Enabled 00:01:59.861 ================= 00:01:59.861 00:01:59.861 libs: 00:01:59.861 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.861 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.861 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.861 00:01:59.861 Message: 00:01:59.861 =============== 00:01:59.861 Drivers Enabled 00:01:59.861 =============== 00:01:59.861 00:01:59.861 common: 00:01:59.861 00:01:59.861 bus: 00:01:59.861 pci, vdev, 00:01:59.861 mempool: 00:01:59.861 ring, 00:01:59.861 dma: 00:01:59.861 00:01:59.861 net: 00:01:59.861 00:01:59.861 crypto: 00:01:59.861 00:01:59.861 compress: 00:01:59.861 00:01:59.861 vdpa: 00:01:59.861 00:01:59.861 00:01:59.861 Message: 00:01:59.861 ================= 00:01:59.861 Content Skipped 00:01:59.861 ================= 00:01:59.861 00:01:59.861 apps: 00:01:59.861 dumpcap: explicitly disabled via build config 00:01:59.861 graph: explicitly disabled via build config 00:01:59.861 pdump: explicitly disabled via build config 00:01:59.861 proc-info: explicitly disabled via build config 00:01:59.861 test-acl: explicitly disabled via build config 00:01:59.861 test-bbdev: explicitly disabled via build config 00:01:59.861 test-cmdline: explicitly disabled via build config 00:01:59.861 test-compress-perf: explicitly disabled via build config 00:01:59.861 test-crypto-perf: explicitly disabled via build config 00:01:59.861 test-dma-perf: explicitly disabled via build config 00:01:59.861 test-eventdev: explicitly disabled via build config 00:01:59.861 test-fib: explicitly disabled via build config 00:01:59.861 test-flow-perf: explicitly disabled via build config 00:01:59.861 test-gpudev: explicitly disabled via build config 00:01:59.861 test-mldev: explicitly disabled via build config 00:01:59.861 test-pipeline: explicitly disabled via build config 00:01:59.861 test-pmd: explicitly disabled via build config 00:01:59.861 test-regex: explicitly disabled via build config 00:01:59.861 test-sad: explicitly disabled via build config 00:01:59.861 test-security-perf: explicitly disabled via build config 00:01:59.861 00:01:59.861 libs: 00:01:59.861 argparse: explicitly disabled via build config 00:01:59.861 metrics: explicitly disabled via build config 00:01:59.861 acl: explicitly disabled via build config 00:01:59.861 bbdev: explicitly disabled via build config 00:01:59.861 
bitratestats: explicitly disabled via build config 00:01:59.861 bpf: explicitly disabled via build config 00:01:59.861 cfgfile: explicitly disabled via build config 00:01:59.861 distributor: explicitly disabled via build config 00:01:59.861 efd: explicitly disabled via build config 00:01:59.861 eventdev: explicitly disabled via build config 00:01:59.861 dispatcher: explicitly disabled via build config 00:01:59.861 gpudev: explicitly disabled via build config 00:01:59.861 gro: explicitly disabled via build config 00:01:59.861 gso: explicitly disabled via build config 00:01:59.861 ip_frag: explicitly disabled via build config 00:01:59.861 jobstats: explicitly disabled via build config 00:01:59.861 latencystats: explicitly disabled via build config 00:01:59.861 lpm: explicitly disabled via build config 00:01:59.861 member: explicitly disabled via build config 00:01:59.861 pcapng: explicitly disabled via build config 00:01:59.861 rawdev: explicitly disabled via build config 00:01:59.861 regexdev: explicitly disabled via build config 00:01:59.861 mldev: explicitly disabled via build config 00:01:59.861 rib: explicitly disabled via build config 00:01:59.861 sched: explicitly disabled via build config 00:01:59.861 stack: explicitly disabled via build config 00:01:59.861 ipsec: explicitly disabled via build config 00:01:59.861 pdcp: explicitly disabled via build config 00:01:59.861 fib: explicitly disabled via build config 00:01:59.861 port: explicitly disabled via build config 00:01:59.861 pdump: explicitly disabled via build config 00:01:59.861 table: explicitly disabled via build config 00:01:59.861 pipeline: explicitly disabled via build config 00:01:59.861 graph: explicitly disabled via build config 00:01:59.861 node: explicitly disabled via build config 00:01:59.861 00:01:59.861 drivers: 00:01:59.861 common/cpt: not in enabled drivers build config 00:01:59.861 common/dpaax: not in enabled drivers build config 00:01:59.861 common/iavf: not in enabled drivers build config 00:01:59.861 common/idpf: not in enabled drivers build config 00:01:59.861 common/ionic: not in enabled drivers build config 00:01:59.861 common/mvep: not in enabled drivers build config 00:01:59.861 common/octeontx: not in enabled drivers build config 00:01:59.861 bus/auxiliary: not in enabled drivers build config 00:01:59.861 bus/cdx: not in enabled drivers build config 00:01:59.861 bus/dpaa: not in enabled drivers build config 00:01:59.862 bus/fslmc: not in enabled drivers build config 00:01:59.862 bus/ifpga: not in enabled drivers build config 00:01:59.862 bus/platform: not in enabled drivers build config 00:01:59.862 bus/uacce: not in enabled drivers build config 00:01:59.862 bus/vmbus: not in enabled drivers build config 00:01:59.862 common/cnxk: not in enabled drivers build config 00:01:59.862 common/mlx5: not in enabled drivers build config 00:01:59.862 common/nfp: not in enabled drivers build config 00:01:59.862 common/nitrox: not in enabled drivers build config 00:01:59.862 common/qat: not in enabled drivers build config 00:01:59.862 common/sfc_efx: not in enabled drivers build config 00:01:59.862 mempool/bucket: not in enabled drivers build config 00:01:59.862 mempool/cnxk: not in enabled drivers build config 00:01:59.862 mempool/dpaa: not in enabled drivers build config 00:01:59.862 mempool/dpaa2: not in enabled drivers build config 00:01:59.862 mempool/octeontx: not in enabled drivers build config 00:01:59.862 mempool/stack: not in enabled drivers build config 00:01:59.862 dma/cnxk: not in enabled drivers build 
config 00:01:59.862 dma/dpaa: not in enabled drivers build config 00:01:59.862 dma/dpaa2: not in enabled drivers build config 00:01:59.862 dma/hisilicon: not in enabled drivers build config 00:01:59.862 dma/idxd: not in enabled drivers build config 00:01:59.862 dma/ioat: not in enabled drivers build config 00:01:59.862 dma/skeleton: not in enabled drivers build config 00:01:59.862 net/af_packet: not in enabled drivers build config 00:01:59.862 net/af_xdp: not in enabled drivers build config 00:01:59.862 net/ark: not in enabled drivers build config 00:01:59.862 net/atlantic: not in enabled drivers build config 00:01:59.862 net/avp: not in enabled drivers build config 00:01:59.862 net/axgbe: not in enabled drivers build config 00:01:59.862 net/bnx2x: not in enabled drivers build config 00:01:59.862 net/bnxt: not in enabled drivers build config 00:01:59.862 net/bonding: not in enabled drivers build config 00:01:59.862 net/cnxk: not in enabled drivers build config 00:01:59.862 net/cpfl: not in enabled drivers build config 00:01:59.862 net/cxgbe: not in enabled drivers build config 00:01:59.862 net/dpaa: not in enabled drivers build config 00:01:59.862 net/dpaa2: not in enabled drivers build config 00:01:59.862 net/e1000: not in enabled drivers build config 00:01:59.862 net/ena: not in enabled drivers build config 00:01:59.862 net/enetc: not in enabled drivers build config 00:01:59.862 net/enetfec: not in enabled drivers build config 00:01:59.862 net/enic: not in enabled drivers build config 00:01:59.862 net/failsafe: not in enabled drivers build config 00:01:59.862 net/fm10k: not in enabled drivers build config 00:01:59.862 net/gve: not in enabled drivers build config 00:01:59.862 net/hinic: not in enabled drivers build config 00:01:59.862 net/hns3: not in enabled drivers build config 00:01:59.862 net/i40e: not in enabled drivers build config 00:01:59.862 net/iavf: not in enabled drivers build config 00:01:59.862 net/ice: not in enabled drivers build config 00:01:59.862 net/idpf: not in enabled drivers build config 00:01:59.862 net/igc: not in enabled drivers build config 00:01:59.862 net/ionic: not in enabled drivers build config 00:01:59.862 net/ipn3ke: not in enabled drivers build config 00:01:59.862 net/ixgbe: not in enabled drivers build config 00:01:59.862 net/mana: not in enabled drivers build config 00:01:59.862 net/memif: not in enabled drivers build config 00:01:59.862 net/mlx4: not in enabled drivers build config 00:01:59.862 net/mlx5: not in enabled drivers build config 00:01:59.862 net/mvneta: not in enabled drivers build config 00:01:59.862 net/mvpp2: not in enabled drivers build config 00:01:59.862 net/netvsc: not in enabled drivers build config 00:01:59.862 net/nfb: not in enabled drivers build config 00:01:59.862 net/nfp: not in enabled drivers build config 00:01:59.862 net/ngbe: not in enabled drivers build config 00:01:59.862 net/null: not in enabled drivers build config 00:01:59.862 net/octeontx: not in enabled drivers build config 00:01:59.862 net/octeon_ep: not in enabled drivers build config 00:01:59.862 net/pcap: not in enabled drivers build config 00:01:59.862 net/pfe: not in enabled drivers build config 00:01:59.862 net/qede: not in enabled drivers build config 00:01:59.862 net/ring: not in enabled drivers build config 00:01:59.862 net/sfc: not in enabled drivers build config 00:01:59.862 net/softnic: not in enabled drivers build config 00:01:59.862 net/tap: not in enabled drivers build config 00:01:59.862 net/thunderx: not in enabled drivers build config 00:01:59.862 
net/txgbe: not in enabled drivers build config 00:01:59.862 net/vdev_netvsc: not in enabled drivers build config 00:01:59.862 net/vhost: not in enabled drivers build config 00:01:59.862 net/virtio: not in enabled drivers build config 00:01:59.862 net/vmxnet3: not in enabled drivers build config 00:01:59.862 raw/*: missing internal dependency, "rawdev" 00:01:59.862 crypto/armv8: not in enabled drivers build config 00:01:59.862 crypto/bcmfs: not in enabled drivers build config 00:01:59.862 crypto/caam_jr: not in enabled drivers build config 00:01:59.862 crypto/ccp: not in enabled drivers build config 00:01:59.862 crypto/cnxk: not in enabled drivers build config 00:01:59.862 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.862 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.862 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.862 crypto/mlx5: not in enabled drivers build config 00:01:59.862 crypto/mvsam: not in enabled drivers build config 00:01:59.862 crypto/nitrox: not in enabled drivers build config 00:01:59.862 crypto/null: not in enabled drivers build config 00:01:59.862 crypto/octeontx: not in enabled drivers build config 00:01:59.862 crypto/openssl: not in enabled drivers build config 00:01:59.862 crypto/scheduler: not in enabled drivers build config 00:01:59.862 crypto/uadk: not in enabled drivers build config 00:01:59.862 crypto/virtio: not in enabled drivers build config 00:01:59.862 compress/isal: not in enabled drivers build config 00:01:59.862 compress/mlx5: not in enabled drivers build config 00:01:59.862 compress/nitrox: not in enabled drivers build config 00:01:59.862 compress/octeontx: not in enabled drivers build config 00:01:59.862 compress/zlib: not in enabled drivers build config 00:01:59.862 regex/*: missing internal dependency, "regexdev" 00:01:59.862 ml/*: missing internal dependency, "mldev" 00:01:59.862 vdpa/ifc: not in enabled drivers build config 00:01:59.862 vdpa/mlx5: not in enabled drivers build config 00:01:59.862 vdpa/nfp: not in enabled drivers build config 00:01:59.862 vdpa/sfc: not in enabled drivers build config 00:01:59.862 event/*: missing internal dependency, "eventdev" 00:01:59.862 baseband/*: missing internal dependency, "bbdev" 00:01:59.862 gpu/*: missing internal dependency, "gpudev" 00:01:59.862 00:01:59.862 00:01:59.862 Build targets in project: 85 00:01:59.862 00:01:59.862 DPDK 24.03.0 00:01:59.862 00:01:59.862 User defined options 00:01:59.862 buildtype : debug 00:01:59.862 default_library : shared 00:01:59.862 libdir : lib 00:01:59.862 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.862 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.862 c_link_args : 00:01:59.862 cpu_instruction_set: native 00:01:59.862 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:59.862 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:59.862 enable_docs : false 00:01:59.862 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.862 enable_kmods : false 00:01:59.862 max_lcores : 128 00:01:59.862 tests : false 00:01:59.862 00:01:59.862 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.862 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:00.120 [1/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.120 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.120 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.120 [4/268] Linking static target lib/librte_kvargs.a 00:02:00.120 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.120 [6/268] Linking static target lib/librte_log.a 00:02:00.686 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.686 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.686 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.686 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.686 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.944 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.944 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.944 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.944 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.944 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.944 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.944 [18/268] Linking static target lib/librte_telemetry.a 00:02:01.202 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.202 [20/268] Linking target lib/librte_log.so.24.1 00:02:01.461 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.461 [22/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.461 [23/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.719 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.719 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.719 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.719 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.719 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.719 [29/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.977 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.977 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.977 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.978 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.978 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.978 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.236 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.494 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.494 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.494 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.494 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.494 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.494 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.494 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.752 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.752 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.752 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.010 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.010 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.268 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.268 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.268 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.268 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.268 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.526 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.784 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.784 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.784 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.042 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:04.042 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:04.042 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:04.042 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.042 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.042 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:04.300 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.559 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.559 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.817 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.817 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.817 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.817 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.075 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.075 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.075 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.075 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.075 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.334 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.334 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.334 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.592 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.592 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.850 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.850 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.850 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.850 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.108 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.108 [86/268] Linking static target lib/librte_eal.a 00:02:06.108 [87/268] Linking static target lib/librte_ring.a 00:02:06.108 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.365 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.366 [90/268] Linking static target lib/librte_rcu.a 00:02:06.366 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.623 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.623 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.623 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.623 [95/268] Linking static target lib/librte_mempool.a 00:02:06.623 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.881 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.881 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.139 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.139 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:07.139 [101/268] Linking static target lib/librte_mbuf.a 00:02:07.139 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.139 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.396 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.396 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.396 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.654 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.654 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.940 [109/268] Linking static target lib/librte_net.a 00:02:07.940 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.940 [111/268] Linking static target lib/librte_meter.a 00:02:07.940 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.239 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.239 [114/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.239 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.239 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.239 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.496 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.496 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.754 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.754 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.012 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.012 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.270 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.270 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.270 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.270 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.270 [128/268] Linking static target lib/librte_pci.a 00:02:09.527 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.527 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.527 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.527 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.527 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.784 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.784 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:09.784 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.784 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.784 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.784 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.784 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.784 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.784 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.784 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.041 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:10.041 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.041 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:10.042 [147/268] Linking static target lib/librte_ethdev.a 00:02:10.042 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.042 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.042 [150/268] Linking static target lib/librte_cmdline.a 00:02:10.300 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.558 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.558 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.815 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:10.815 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.815 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.815 [157/268] Linking static target lib/librte_hash.a 00:02:10.815 [158/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.815 [159/268] Linking static target lib/librte_timer.a 00:02:11.073 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.073 
[161/268] Linking static target lib/librte_compressdev.a 00:02:11.073 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.331 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.590 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.590 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.590 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.590 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.848 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.849 [169/268] Linking static target lib/librte_dmadev.a 00:02:11.849 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.849 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:11.849 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.849 [173/268] Linking static target lib/librte_cryptodev.a 00:02:11.849 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.107 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.107 [176/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.107 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.107 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.674 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.674 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.674 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.674 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.674 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.674 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.674 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.674 [186/268] Linking static target lib/librte_power.a 00:02:12.932 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.932 [188/268] Linking static target lib/librte_reorder.a 00:02:13.191 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.191 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.191 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.449 [192/268] Linking static target lib/librte_security.a 00:02:13.449 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.449 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.707 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.965 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.965 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.965 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.965 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.965 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.238 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.496 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.496 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.496 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.496 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.496 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.754 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.754 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.011 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.011 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.011 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.012 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.012 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.012 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.012 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.012 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.269 [217/268] Linking static target drivers/librte_bus_pci.a 00:02:15.269 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.269 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.269 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.269 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.269 [222/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.269 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.269 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.269 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.269 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.269 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.526 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.091 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:16.091 [230/268] Linking static target lib/librte_vhost.a 00:02:17.023 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.023 [232/268] Linking target lib/librte_eal.so.24.1 00:02:17.023 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.023 [234/268] Linking target lib/librte_meter.so.24.1 00:02:17.023 [235/268] Linking target lib/librte_pci.so.24.1 00:02:17.023 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:17.023 [237/268] Linking target lib/librte_timer.so.24.1 00:02:17.023 [238/268] Linking target lib/librte_ring.so.24.1 00:02:17.023 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
00:02:17.281 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.281 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.281 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.281 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.281 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.281 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.281 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:17.281 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:17.540 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:17.540 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.540 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.540 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:17.540 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:17.540 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.797 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:17.797 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:17.797 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:17.798 [257/268] Linking target lib/librte_net.so.24.1 00:02:17.798 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:17.798 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:17.798 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:18.056 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:18.056 [262/268] Linking target lib/librte_hash.so.24.1 00:02:18.056 [263/268] Linking target lib/librte_security.so.24.1 00:02:18.056 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:18.056 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.056 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.314 [267/268] Linking target lib/librte_power.so.24.1 00:02:18.314 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:18.314 INFO: autodetecting backend as ninja 00:02:18.314 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:19.686 CC lib/ut/ut.o 00:02:19.686 CC lib/log/log.o 00:02:19.686 CC lib/log/log_flags.o 00:02:19.686 CC lib/log/log_deprecated.o 00:02:19.686 CC lib/ut_mock/mock.o 00:02:19.686 LIB libspdk_ut.a 00:02:19.686 LIB libspdk_log.a 00:02:19.686 LIB libspdk_ut_mock.a 00:02:19.686 SO libspdk_ut.so.2.0 00:02:19.686 SO libspdk_ut_mock.so.6.0 00:02:19.686 SO libspdk_log.so.7.0 00:02:19.686 SYMLINK libspdk_ut.so 00:02:19.686 SYMLINK libspdk_ut_mock.so 00:02:19.686 SYMLINK libspdk_log.so 00:02:19.943 CC lib/ioat/ioat.o 00:02:19.943 CXX lib/trace_parser/trace.o 00:02:19.943 CC lib/util/base64.o 00:02:19.943 CC lib/util/bit_array.o 00:02:19.944 CC lib/util/cpuset.o 00:02:19.944 CC lib/util/crc16.o 00:02:19.944 CC lib/util/crc32.o 00:02:19.944 CC lib/util/crc32c.o 00:02:19.944 CC lib/dma/dma.o 00:02:19.944 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.944 CC lib/util/crc32_ieee.o 00:02:19.944 CC lib/util/crc64.o 00:02:19.944 CC lib/util/dif.o 
00:02:20.201 CC lib/util/fd.o 00:02:20.201 LIB libspdk_dma.a 00:02:20.201 CC lib/vfio_user/host/vfio_user.o 00:02:20.201 SO libspdk_dma.so.4.0 00:02:20.201 CC lib/util/file.o 00:02:20.201 CC lib/util/hexlify.o 00:02:20.201 SYMLINK libspdk_dma.so 00:02:20.201 CC lib/util/iov.o 00:02:20.201 LIB libspdk_ioat.a 00:02:20.201 CC lib/util/math.o 00:02:20.201 CC lib/util/pipe.o 00:02:20.201 SO libspdk_ioat.so.7.0 00:02:20.201 CC lib/util/strerror_tls.o 00:02:20.201 CC lib/util/string.o 00:02:20.201 SYMLINK libspdk_ioat.so 00:02:20.459 CC lib/util/uuid.o 00:02:20.459 LIB libspdk_vfio_user.a 00:02:20.459 CC lib/util/fd_group.o 00:02:20.459 SO libspdk_vfio_user.so.5.0 00:02:20.459 CC lib/util/xor.o 00:02:20.459 CC lib/util/zipf.o 00:02:20.459 SYMLINK libspdk_vfio_user.so 00:02:20.717 LIB libspdk_util.a 00:02:20.717 SO libspdk_util.so.9.1 00:02:20.974 SYMLINK libspdk_util.so 00:02:20.975 LIB libspdk_trace_parser.a 00:02:20.975 SO libspdk_trace_parser.so.5.0 00:02:20.975 CC lib/conf/conf.o 00:02:20.975 CC lib/idxd/idxd.o 00:02:20.975 CC lib/idxd/idxd_user.o 00:02:20.975 CC lib/idxd/idxd_kernel.o 00:02:20.975 CC lib/rdma_provider/common.o 00:02:20.975 CC lib/json/json_parse.o 00:02:20.975 CC lib/rdma_utils/rdma_utils.o 00:02:20.975 CC lib/env_dpdk/env.o 00:02:20.975 CC lib/vmd/vmd.o 00:02:20.975 SYMLINK libspdk_trace_parser.so 00:02:20.975 CC lib/json/json_util.o 00:02:21.232 CC lib/json/json_write.o 00:02:21.232 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:21.232 CC lib/vmd/led.o 00:02:21.232 LIB libspdk_conf.a 00:02:21.232 CC lib/env_dpdk/memory.o 00:02:21.232 SO libspdk_conf.so.6.0 00:02:21.232 LIB libspdk_rdma_utils.a 00:02:21.232 CC lib/env_dpdk/pci.o 00:02:21.490 SO libspdk_rdma_utils.so.1.0 00:02:21.490 SYMLINK libspdk_conf.so 00:02:21.490 CC lib/env_dpdk/init.o 00:02:21.490 SYMLINK libspdk_rdma_utils.so 00:02:21.490 CC lib/env_dpdk/threads.o 00:02:21.490 CC lib/env_dpdk/pci_ioat.o 00:02:21.490 LIB libspdk_rdma_provider.a 00:02:21.490 SO libspdk_rdma_provider.so.6.0 00:02:21.490 LIB libspdk_json.a 00:02:21.490 SO libspdk_json.so.6.0 00:02:21.490 LIB libspdk_idxd.a 00:02:21.490 SYMLINK libspdk_rdma_provider.so 00:02:21.490 CC lib/env_dpdk/pci_virtio.o 00:02:21.490 CC lib/env_dpdk/pci_vmd.o 00:02:21.802 SYMLINK libspdk_json.so 00:02:21.802 CC lib/env_dpdk/pci_idxd.o 00:02:21.802 SO libspdk_idxd.so.12.0 00:02:21.802 LIB libspdk_vmd.a 00:02:21.802 SYMLINK libspdk_idxd.so 00:02:21.802 CC lib/env_dpdk/pci_event.o 00:02:21.802 CC lib/env_dpdk/sigbus_handler.o 00:02:21.802 SO libspdk_vmd.so.6.0 00:02:21.802 CC lib/env_dpdk/pci_dpdk.o 00:02:21.802 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.802 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.802 SYMLINK libspdk_vmd.so 00:02:21.803 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.803 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.803 CC lib/jsonrpc/jsonrpc_client.o 00:02:21.803 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:22.060 LIB libspdk_jsonrpc.a 00:02:22.060 SO libspdk_jsonrpc.so.6.0 00:02:22.317 SYMLINK libspdk_jsonrpc.so 00:02:22.576 CC lib/rpc/rpc.o 00:02:22.576 LIB libspdk_env_dpdk.a 00:02:22.576 SO libspdk_env_dpdk.so.14.1 00:02:22.576 LIB libspdk_rpc.a 00:02:22.843 SO libspdk_rpc.so.6.0 00:02:22.843 SYMLINK libspdk_env_dpdk.so 00:02:22.843 SYMLINK libspdk_rpc.so 00:02:23.101 CC lib/notify/notify.o 00:02:23.101 CC lib/notify/notify_rpc.o 00:02:23.101 CC lib/trace/trace.o 00:02:23.101 CC lib/trace/trace_rpc.o 00:02:23.101 CC lib/trace/trace_flags.o 00:02:23.101 CC lib/keyring/keyring_rpc.o 00:02:23.101 CC lib/keyring/keyring.o 00:02:23.101 LIB libspdk_notify.a 
00:02:23.359 SO libspdk_notify.so.6.0 00:02:23.359 LIB libspdk_keyring.a 00:02:23.359 SYMLINK libspdk_notify.so 00:02:23.359 SO libspdk_keyring.so.1.0 00:02:23.359 LIB libspdk_trace.a 00:02:23.359 SYMLINK libspdk_keyring.so 00:02:23.359 SO libspdk_trace.so.10.0 00:02:23.359 SYMLINK libspdk_trace.so 00:02:23.616 CC lib/sock/sock.o 00:02:23.617 CC lib/sock/sock_rpc.o 00:02:23.617 CC lib/thread/thread.o 00:02:23.617 CC lib/thread/iobuf.o 00:02:24.183 LIB libspdk_sock.a 00:02:24.183 SO libspdk_sock.so.10.0 00:02:24.441 SYMLINK libspdk_sock.so 00:02:24.700 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.700 CC lib/nvme/nvme_fabric.o 00:02:24.700 CC lib/nvme/nvme_ctrlr.o 00:02:24.700 CC lib/nvme/nvme_ns_cmd.o 00:02:24.700 CC lib/nvme/nvme_ns.o 00:02:24.700 CC lib/nvme/nvme_pcie.o 00:02:24.700 CC lib/nvme/nvme_pcie_common.o 00:02:24.700 CC lib/nvme/nvme_qpair.o 00:02:24.700 CC lib/nvme/nvme.o 00:02:25.265 CC lib/nvme/nvme_quirks.o 00:02:25.523 CC lib/nvme/nvme_transport.o 00:02:25.523 CC lib/nvme/nvme_discovery.o 00:02:25.523 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.523 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.523 LIB libspdk_thread.a 00:02:25.523 SO libspdk_thread.so.10.1 00:02:25.523 CC lib/nvme/nvme_tcp.o 00:02:25.781 CC lib/nvme/nvme_opal.o 00:02:25.782 SYMLINK libspdk_thread.so 00:02:25.782 CC lib/nvme/nvme_io_msg.o 00:02:25.782 CC lib/accel/accel.o 00:02:25.782 CC lib/nvme/nvme_poll_group.o 00:02:26.040 CC lib/nvme/nvme_zns.o 00:02:26.322 CC lib/accel/accel_rpc.o 00:02:26.322 CC lib/accel/accel_sw.o 00:02:26.322 CC lib/nvme/nvme_stubs.o 00:02:26.322 CC lib/blob/blobstore.o 00:02:26.322 CC lib/blob/request.o 00:02:26.322 CC lib/nvme/nvme_auth.o 00:02:26.580 CC lib/nvme/nvme_cuse.o 00:02:26.580 CC lib/blob/zeroes.o 00:02:26.580 CC lib/blob/blob_bs_dev.o 00:02:26.838 CC lib/nvme/nvme_rdma.o 00:02:26.838 CC lib/init/json_config.o 00:02:26.838 CC lib/init/subsystem.o 00:02:26.838 CC lib/virtio/virtio.o 00:02:26.838 LIB libspdk_accel.a 00:02:27.097 SO libspdk_accel.so.15.1 00:02:27.097 SYMLINK libspdk_accel.so 00:02:27.097 CC lib/init/subsystem_rpc.o 00:02:27.097 CC lib/init/rpc.o 00:02:27.097 CC lib/virtio/virtio_vhost_user.o 00:02:27.355 CC lib/virtio/virtio_vfio_user.o 00:02:27.355 CC lib/virtio/virtio_pci.o 00:02:27.355 LIB libspdk_init.a 00:02:27.355 CC lib/bdev/bdev_rpc.o 00:02:27.355 CC lib/bdev/bdev.o 00:02:27.355 CC lib/bdev/bdev_zone.o 00:02:27.355 SO libspdk_init.so.5.0 00:02:27.355 SYMLINK libspdk_init.so 00:02:27.355 CC lib/bdev/part.o 00:02:27.355 CC lib/bdev/scsi_nvme.o 00:02:27.613 LIB libspdk_virtio.a 00:02:27.613 SO libspdk_virtio.so.7.0 00:02:27.613 CC lib/event/reactor.o 00:02:27.613 CC lib/event/app.o 00:02:27.613 CC lib/event/log_rpc.o 00:02:27.613 CC lib/event/app_rpc.o 00:02:27.613 CC lib/event/scheduler_static.o 00:02:27.613 SYMLINK libspdk_virtio.so 00:02:28.178 LIB libspdk_event.a 00:02:28.178 SO libspdk_event.so.14.0 00:02:28.178 LIB libspdk_nvme.a 00:02:28.178 SYMLINK libspdk_event.so 00:02:28.436 SO libspdk_nvme.so.13.1 00:02:28.693 SYMLINK libspdk_nvme.so 00:02:29.259 LIB libspdk_blob.a 00:02:29.517 SO libspdk_blob.so.11.0 00:02:29.517 SYMLINK libspdk_blob.so 00:02:29.783 CC lib/blobfs/blobfs.o 00:02:29.783 CC lib/blobfs/tree.o 00:02:29.783 CC lib/lvol/lvol.o 00:02:30.045 LIB libspdk_bdev.a 00:02:30.045 SO libspdk_bdev.so.15.1 00:02:30.304 SYMLINK libspdk_bdev.so 00:02:30.561 CC lib/scsi/dev.o 00:02:30.561 CC lib/scsi/lun.o 00:02:30.561 CC lib/scsi/scsi.o 00:02:30.561 CC lib/scsi/port.o 00:02:30.561 CC lib/ftl/ftl_core.o 00:02:30.561 CC lib/nvmf/ctrlr.o 00:02:30.561 CC 
lib/ublk/ublk.o 00:02:30.561 CC lib/nbd/nbd.o 00:02:30.561 LIB libspdk_blobfs.a 00:02:30.561 CC lib/nbd/nbd_rpc.o 00:02:30.561 SO libspdk_blobfs.so.10.0 00:02:30.818 CC lib/ftl/ftl_init.o 00:02:30.818 SYMLINK libspdk_blobfs.so 00:02:30.818 CC lib/ftl/ftl_layout.o 00:02:30.818 CC lib/scsi/scsi_bdev.o 00:02:30.818 LIB libspdk_lvol.a 00:02:30.818 CC lib/ublk/ublk_rpc.o 00:02:30.818 CC lib/ftl/ftl_debug.o 00:02:30.818 SO libspdk_lvol.so.10.0 00:02:31.075 CC lib/scsi/scsi_pr.o 00:02:31.076 CC lib/nvmf/ctrlr_discovery.o 00:02:31.076 LIB libspdk_nbd.a 00:02:31.076 SYMLINK libspdk_lvol.so 00:02:31.076 CC lib/nvmf/ctrlr_bdev.o 00:02:31.076 SO libspdk_nbd.so.7.0 00:02:31.076 CC lib/scsi/scsi_rpc.o 00:02:31.076 SYMLINK libspdk_nbd.so 00:02:31.076 CC lib/scsi/task.o 00:02:31.076 CC lib/ftl/ftl_io.o 00:02:31.076 CC lib/ftl/ftl_sb.o 00:02:31.076 LIB libspdk_ublk.a 00:02:31.333 CC lib/ftl/ftl_l2p.o 00:02:31.333 SO libspdk_ublk.so.3.0 00:02:31.333 CC lib/ftl/ftl_l2p_flat.o 00:02:31.333 CC lib/ftl/ftl_nv_cache.o 00:02:31.333 SYMLINK libspdk_ublk.so 00:02:31.333 CC lib/ftl/ftl_band.o 00:02:31.333 LIB libspdk_scsi.a 00:02:31.333 CC lib/ftl/ftl_band_ops.o 00:02:31.333 SO libspdk_scsi.so.9.0 00:02:31.333 CC lib/nvmf/subsystem.o 00:02:31.333 CC lib/ftl/ftl_writer.o 00:02:31.593 CC lib/ftl/ftl_rq.o 00:02:31.593 SYMLINK libspdk_scsi.so 00:02:31.593 CC lib/ftl/ftl_reloc.o 00:02:31.593 CC lib/nvmf/nvmf.o 00:02:31.593 CC lib/iscsi/conn.o 00:02:31.593 CC lib/nvmf/nvmf_rpc.o 00:02:31.593 CC lib/ftl/ftl_l2p_cache.o 00:02:31.852 CC lib/ftl/ftl_p2l.o 00:02:31.852 CC lib/ftl/mngt/ftl_mngt.o 00:02:31.852 CC lib/vhost/vhost.o 00:02:32.111 CC lib/vhost/vhost_rpc.o 00:02:32.111 CC lib/vhost/vhost_scsi.o 00:02:32.111 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.369 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.369 CC lib/iscsi/init_grp.o 00:02:32.369 CC lib/vhost/vhost_blk.o 00:02:32.369 CC lib/iscsi/iscsi.o 00:02:32.369 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.369 CC lib/nvmf/transport.o 00:02:32.369 CC lib/vhost/rte_vhost_user.o 00:02:32.627 CC lib/iscsi/md5.o 00:02:32.627 CC lib/nvmf/tcp.o 00:02:32.627 CC lib/nvmf/stubs.o 00:02:32.627 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.627 CC lib/iscsi/param.o 00:02:32.885 CC lib/nvmf/mdns_server.o 00:02:32.885 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.143 CC lib/nvmf/rdma.o 00:02:33.143 CC lib/nvmf/auth.o 00:02:33.143 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.143 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.143 CC lib/iscsi/portal_grp.o 00:02:33.401 CC lib/iscsi/tgt_node.o 00:02:33.401 CC lib/iscsi/iscsi_subsystem.o 00:02:33.401 CC lib/iscsi/iscsi_rpc.o 00:02:33.401 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.401 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.657 LIB libspdk_vhost.a 00:02:33.657 CC lib/iscsi/task.o 00:02:33.657 SO libspdk_vhost.so.8.0 00:02:33.657 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.657 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.657 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.657 SYMLINK libspdk_vhost.so 00:02:33.657 CC lib/ftl/utils/ftl_conf.o 00:02:33.657 CC lib/ftl/utils/ftl_md.o 00:02:33.657 CC lib/ftl/utils/ftl_mempool.o 00:02:33.915 LIB libspdk_iscsi.a 00:02:33.915 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.915 SO libspdk_iscsi.so.8.0 00:02:33.915 CC lib/ftl/utils/ftl_property.o 00:02:33.915 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.915 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:33.915 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.916 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:34.174 SYMLINK libspdk_iscsi.so 00:02:34.174 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:02:34.174 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:34.174 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:34.174 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:34.174 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:34.174 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:34.174 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:34.174 CC lib/ftl/base/ftl_base_dev.o 00:02:34.174 CC lib/ftl/base/ftl_base_bdev.o 00:02:34.174 CC lib/ftl/ftl_trace.o 00:02:34.431 LIB libspdk_ftl.a 00:02:34.688 SO libspdk_ftl.so.9.0 00:02:35.254 LIB libspdk_nvmf.a 00:02:35.254 SYMLINK libspdk_ftl.so 00:02:35.254 SO libspdk_nvmf.so.18.1 00:02:35.512 SYMLINK libspdk_nvmf.so 00:02:35.770 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.770 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.770 CC module/blob/bdev/blob_bdev.o 00:02:35.770 CC module/accel/dsa/accel_dsa.o 00:02:35.770 CC module/accel/error/accel_error.o 00:02:35.770 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.770 CC module/accel/ioat/accel_ioat.o 00:02:35.770 CC module/keyring/file/keyring.o 00:02:35.770 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.770 CC module/sock/posix/posix.o 00:02:36.028 LIB libspdk_env_dpdk_rpc.a 00:02:36.028 SO libspdk_env_dpdk_rpc.so.6.0 00:02:36.028 CC module/keyring/file/keyring_rpc.o 00:02:36.028 LIB libspdk_scheduler_gscheduler.a 00:02:36.028 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.028 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.028 CC module/accel/error/accel_error_rpc.o 00:02:36.028 SO libspdk_scheduler_gscheduler.so.4.0 00:02:36.028 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.028 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:36.028 LIB libspdk_scheduler_dynamic.a 00:02:36.028 SO libspdk_scheduler_dynamic.so.4.0 00:02:36.028 SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.028 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.028 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.286 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.286 LIB libspdk_blob_bdev.a 00:02:36.286 LIB libspdk_keyring_file.a 00:02:36.286 LIB libspdk_accel_error.a 00:02:36.286 LIB libspdk_accel_ioat.a 00:02:36.286 SO libspdk_blob_bdev.so.11.0 00:02:36.286 SO libspdk_keyring_file.so.1.0 00:02:36.286 SO libspdk_accel_error.so.2.0 00:02:36.286 SO libspdk_accel_ioat.so.6.0 00:02:36.286 SYMLINK libspdk_blob_bdev.so 00:02:36.286 SYMLINK libspdk_keyring_file.so 00:02:36.286 LIB libspdk_accel_dsa.a 00:02:36.286 SYMLINK libspdk_accel_error.so 00:02:36.286 SYMLINK libspdk_accel_ioat.so 00:02:36.286 CC module/sock/uring/uring.o 00:02:36.286 CC module/accel/iaa/accel_iaa.o 00:02:36.286 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.286 SO libspdk_accel_dsa.so.5.0 00:02:36.286 CC module/keyring/linux/keyring.o 00:02:36.286 CC module/keyring/linux/keyring_rpc.o 00:02:36.286 SYMLINK libspdk_accel_dsa.so 00:02:36.543 CC module/bdev/error/vbdev_error.o 00:02:36.543 LIB libspdk_keyring_linux.a 00:02:36.543 LIB libspdk_accel_iaa.a 00:02:36.543 CC module/bdev/delay/vbdev_delay.o 00:02:36.543 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.543 CC module/bdev/gpt/gpt.o 00:02:36.543 SO libspdk_keyring_linux.so.1.0 00:02:36.543 SO libspdk_accel_iaa.so.3.0 00:02:36.543 LIB libspdk_sock_posix.a 00:02:36.543 SYMLINK libspdk_keyring_linux.so 00:02:36.544 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.544 SYMLINK libspdk_accel_iaa.so 00:02:36.544 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.544 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.544 CC module/bdev/malloc/bdev_malloc.o 00:02:36.544 SO libspdk_sock_posix.so.6.0 00:02:36.800 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.800 CC 
module/bdev/gpt/vbdev_gpt.o 00:02:36.800 SYMLINK libspdk_sock_posix.so 00:02:36.800 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.800 LIB libspdk_bdev_error.a 00:02:36.800 SO libspdk_bdev_error.so.6.0 00:02:36.800 LIB libspdk_bdev_delay.a 00:02:36.800 LIB libspdk_blobfs_bdev.a 00:02:37.057 SO libspdk_bdev_delay.so.6.0 00:02:37.057 CC module/bdev/null/bdev_null.o 00:02:37.057 SYMLINK libspdk_bdev_error.so 00:02:37.057 CC module/bdev/null/bdev_null_rpc.o 00:02:37.057 SO libspdk_blobfs_bdev.so.6.0 00:02:37.057 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:37.057 LIB libspdk_sock_uring.a 00:02:37.057 SYMLINK libspdk_bdev_delay.so 00:02:37.057 SO libspdk_sock_uring.so.5.0 00:02:37.057 CC module/bdev/nvme/bdev_nvme.o 00:02:37.057 LIB libspdk_bdev_malloc.a 00:02:37.057 LIB libspdk_bdev_gpt.a 00:02:37.057 SYMLINK libspdk_blobfs_bdev.so 00:02:37.057 SO libspdk_bdev_malloc.so.6.0 00:02:37.057 SO libspdk_bdev_gpt.so.6.0 00:02:37.057 SYMLINK libspdk_sock_uring.so 00:02:37.057 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.057 SYMLINK libspdk_bdev_malloc.so 00:02:37.057 CC module/bdev/nvme/nvme_rpc.o 00:02:37.057 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.057 SYMLINK libspdk_bdev_gpt.so 00:02:37.057 CC module/bdev/nvme/vbdev_opal.o 00:02:37.057 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.336 CC module/bdev/raid/bdev_raid.o 00:02:37.336 LIB libspdk_bdev_null.a 00:02:37.336 SO libspdk_bdev_null.so.6.0 00:02:37.336 LIB libspdk_bdev_lvol.a 00:02:37.336 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.336 SYMLINK libspdk_bdev_null.so 00:02:37.336 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.336 CC module/bdev/split/vbdev_split.o 00:02:37.336 SO libspdk_bdev_lvol.so.6.0 00:02:37.336 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:37.336 SYMLINK libspdk_bdev_lvol.so 00:02:37.336 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:37.594 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.594 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.594 CC module/bdev/raid/raid0.o 00:02:37.594 LIB libspdk_bdev_passthru.a 00:02:37.594 CC module/bdev/raid/raid1.o 00:02:37.594 SO libspdk_bdev_passthru.so.6.0 00:02:37.594 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.594 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:37.852 CC module/bdev/raid/concat.o 00:02:37.852 SYMLINK libspdk_bdev_passthru.so 00:02:37.852 LIB libspdk_bdev_split.a 00:02:37.852 CC module/bdev/uring/bdev_uring.o 00:02:37.852 SO libspdk_bdev_split.so.6.0 00:02:37.852 CC module/bdev/uring/bdev_uring_rpc.o 00:02:37.852 SYMLINK libspdk_bdev_split.so 00:02:37.852 CC module/bdev/aio/bdev_aio.o 00:02:37.852 CC module/bdev/aio/bdev_aio_rpc.o 00:02:38.110 LIB libspdk_bdev_zone_block.a 00:02:38.110 SO libspdk_bdev_zone_block.so.6.0 00:02:38.110 CC module/bdev/ftl/bdev_ftl.o 00:02:38.110 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:38.110 CC module/bdev/iscsi/bdev_iscsi.o 00:02:38.110 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:38.110 SYMLINK libspdk_bdev_zone_block.so 00:02:38.110 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:38.110 LIB libspdk_bdev_uring.a 00:02:38.110 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:38.110 SO libspdk_bdev_uring.so.6.0 00:02:38.110 LIB libspdk_bdev_raid.a 00:02:38.368 SYMLINK libspdk_bdev_uring.so 00:02:38.368 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:38.368 SO libspdk_bdev_raid.so.6.0 00:02:38.368 LIB libspdk_bdev_aio.a 00:02:38.368 SO libspdk_bdev_aio.so.6.0 00:02:38.368 SYMLINK libspdk_bdev_raid.so 00:02:38.368 LIB libspdk_bdev_ftl.a 00:02:38.368 SYMLINK libspdk_bdev_aio.so 00:02:38.368 SO libspdk_bdev_ftl.so.6.0 
00:02:38.368 LIB libspdk_bdev_iscsi.a 00:02:38.627 SYMLINK libspdk_bdev_ftl.so 00:02:38.627 SO libspdk_bdev_iscsi.so.6.0 00:02:38.627 SYMLINK libspdk_bdev_iscsi.so 00:02:38.627 LIB libspdk_bdev_virtio.a 00:02:38.627 SO libspdk_bdev_virtio.so.6.0 00:02:38.885 SYMLINK libspdk_bdev_virtio.so 00:02:39.143 LIB libspdk_bdev_nvme.a 00:02:39.401 SO libspdk_bdev_nvme.so.7.0 00:02:39.401 SYMLINK libspdk_bdev_nvme.so 00:02:39.967 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.967 CC module/event/subsystems/vmd/vmd.o 00:02:39.967 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.967 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.967 CC module/event/subsystems/keyring/keyring.o 00:02:39.967 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.967 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.967 CC module/event/subsystems/sock/sock.o 00:02:39.967 LIB libspdk_event_scheduler.a 00:02:39.967 LIB libspdk_event_vmd.a 00:02:40.225 LIB libspdk_event_keyring.a 00:02:40.225 LIB libspdk_event_vhost_blk.a 00:02:40.225 SO libspdk_event_scheduler.so.4.0 00:02:40.225 SO libspdk_event_vmd.so.6.0 00:02:40.225 LIB libspdk_event_sock.a 00:02:40.225 LIB libspdk_event_iobuf.a 00:02:40.225 SO libspdk_event_vhost_blk.so.3.0 00:02:40.225 SO libspdk_event_keyring.so.1.0 00:02:40.225 SO libspdk_event_sock.so.5.0 00:02:40.225 SO libspdk_event_iobuf.so.3.0 00:02:40.225 SYMLINK libspdk_event_scheduler.so 00:02:40.225 SYMLINK libspdk_event_keyring.so 00:02:40.225 SYMLINK libspdk_event_vhost_blk.so 00:02:40.225 SYMLINK libspdk_event_vmd.so 00:02:40.225 SYMLINK libspdk_event_sock.so 00:02:40.225 SYMLINK libspdk_event_iobuf.so 00:02:40.483 CC module/event/subsystems/accel/accel.o 00:02:40.740 LIB libspdk_event_accel.a 00:02:40.740 SO libspdk_event_accel.so.6.0 00:02:40.740 SYMLINK libspdk_event_accel.so 00:02:40.999 CC module/event/subsystems/bdev/bdev.o 00:02:41.257 LIB libspdk_event_bdev.a 00:02:41.257 SO libspdk_event_bdev.so.6.0 00:02:41.257 SYMLINK libspdk_event_bdev.so 00:02:41.514 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.514 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.514 CC module/event/subsystems/ublk/ublk.o 00:02:41.514 CC module/event/subsystems/nbd/nbd.o 00:02:41.514 CC module/event/subsystems/scsi/scsi.o 00:02:41.772 LIB libspdk_event_ublk.a 00:02:41.772 LIB libspdk_event_nbd.a 00:02:41.772 SO libspdk_event_ublk.so.3.0 00:02:41.772 SO libspdk_event_nbd.so.6.0 00:02:41.772 LIB libspdk_event_scsi.a 00:02:41.772 SYMLINK libspdk_event_ublk.so 00:02:41.772 SO libspdk_event_scsi.so.6.0 00:02:41.772 LIB libspdk_event_nvmf.a 00:02:41.772 SYMLINK libspdk_event_nbd.so 00:02:41.772 SYMLINK libspdk_event_scsi.so 00:02:41.772 SO libspdk_event_nvmf.so.6.0 00:02:42.029 SYMLINK libspdk_event_nvmf.so 00:02:42.029 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.029 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.287 LIB libspdk_event_vhost_scsi.a 00:02:42.287 LIB libspdk_event_iscsi.a 00:02:42.287 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.287 SO libspdk_event_iscsi.so.6.0 00:02:42.594 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.594 SYMLINK libspdk_event_iscsi.so 00:02:42.594 SO libspdk.so.6.0 00:02:42.594 SYMLINK libspdk.so 00:02:42.924 CC app/spdk_lspci/spdk_lspci.o 00:02:42.924 CC app/trace_record/trace_record.o 00:02:42.924 CXX app/trace/trace.o 00:02:42.924 CC app/spdk_nvme_perf/perf.o 00:02:42.924 CC app/spdk_nvme_identify/identify.o 00:02:42.924 CC app/iscsi_tgt/iscsi_tgt.o 00:02:42.924 CC app/spdk_tgt/spdk_tgt.o 00:02:42.924 CC app/nvmf_tgt/nvmf_main.o 00:02:42.924 CC 
examples/util/zipf/zipf.o 00:02:43.182 LINK spdk_lspci 00:02:43.182 CC test/thread/poller_perf/poller_perf.o 00:02:43.182 LINK spdk_trace_record 00:02:43.182 LINK spdk_tgt 00:02:43.182 LINK nvmf_tgt 00:02:43.182 LINK zipf 00:02:43.182 LINK iscsi_tgt 00:02:43.182 LINK poller_perf 00:02:43.441 LINK spdk_trace 00:02:43.441 CC app/spdk_nvme_discover/discovery_aer.o 00:02:43.441 CC app/spdk_top/spdk_top.o 00:02:43.698 CC app/spdk_dd/spdk_dd.o 00:02:43.698 CC examples/ioat/perf/perf.o 00:02:43.698 LINK spdk_nvme_discover 00:02:43.698 CC test/dma/test_dma/test_dma.o 00:02:43.698 CC app/fio/nvme/fio_plugin.o 00:02:43.698 CC examples/vmd/lsvmd/lsvmd.o 00:02:43.698 CC test/app/bdev_svc/bdev_svc.o 00:02:43.698 LINK spdk_nvme_identify 00:02:43.698 LINK spdk_nvme_perf 00:02:43.698 LINK ioat_perf 00:02:43.955 LINK lsvmd 00:02:43.955 CC app/fio/bdev/fio_plugin.o 00:02:43.955 LINK bdev_svc 00:02:43.955 LINK spdk_dd 00:02:43.955 LINK test_dma 00:02:43.955 CC examples/vmd/led/led.o 00:02:43.955 CC examples/ioat/verify/verify.o 00:02:44.213 CC test/app/histogram_perf/histogram_perf.o 00:02:44.213 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:44.213 LINK spdk_nvme 00:02:44.213 LINK led 00:02:44.213 LINK histogram_perf 00:02:44.213 LINK verify 00:02:44.213 LINK spdk_top 00:02:44.470 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:44.470 CC examples/idxd/perf/perf.o 00:02:44.470 LINK spdk_bdev 00:02:44.470 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:44.470 CC examples/thread/thread/thread_ex.o 00:02:44.470 CC examples/sock/hello_world/hello_sock.o 00:02:44.470 CC test/app/jsoncat/jsoncat.o 00:02:44.470 CC test/app/stub/stub.o 00:02:44.470 LINK nvme_fuzz 00:02:44.727 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.727 LINK interrupt_tgt 00:02:44.727 LINK jsoncat 00:02:44.727 CC app/vhost/vhost.o 00:02:44.727 LINK stub 00:02:44.727 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.727 LINK idxd_perf 00:02:44.727 LINK hello_sock 00:02:44.727 LINK thread 00:02:44.986 TEST_HEADER include/spdk/accel.h 00:02:44.986 TEST_HEADER include/spdk/accel_module.h 00:02:44.986 TEST_HEADER include/spdk/assert.h 00:02:44.986 TEST_HEADER include/spdk/barrier.h 00:02:44.986 TEST_HEADER include/spdk/base64.h 00:02:44.986 TEST_HEADER include/spdk/bdev.h 00:02:44.986 TEST_HEADER include/spdk/bdev_module.h 00:02:44.986 TEST_HEADER include/spdk/bdev_zone.h 00:02:44.986 TEST_HEADER include/spdk/bit_array.h 00:02:44.986 TEST_HEADER include/spdk/bit_pool.h 00:02:44.986 TEST_HEADER include/spdk/blob_bdev.h 00:02:44.986 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:44.986 TEST_HEADER include/spdk/blobfs.h 00:02:44.986 TEST_HEADER include/spdk/blob.h 00:02:44.986 TEST_HEADER include/spdk/conf.h 00:02:44.986 TEST_HEADER include/spdk/config.h 00:02:44.986 TEST_HEADER include/spdk/cpuset.h 00:02:44.986 TEST_HEADER include/spdk/crc16.h 00:02:44.986 TEST_HEADER include/spdk/crc32.h 00:02:44.986 TEST_HEADER include/spdk/crc64.h 00:02:44.986 TEST_HEADER include/spdk/dif.h 00:02:44.986 TEST_HEADER include/spdk/dma.h 00:02:44.986 TEST_HEADER include/spdk/endian.h 00:02:44.986 TEST_HEADER include/spdk/env_dpdk.h 00:02:44.986 LINK vhost 00:02:44.986 TEST_HEADER include/spdk/env.h 00:02:44.986 TEST_HEADER include/spdk/event.h 00:02:44.986 TEST_HEADER include/spdk/fd_group.h 00:02:44.986 TEST_HEADER include/spdk/fd.h 00:02:44.986 TEST_HEADER include/spdk/file.h 00:02:44.986 TEST_HEADER include/spdk/ftl.h 00:02:44.986 TEST_HEADER include/spdk/gpt_spec.h 00:02:44.986 TEST_HEADER include/spdk/hexlify.h 00:02:44.986 TEST_HEADER 
include/spdk/histogram_data.h 00:02:44.986 TEST_HEADER include/spdk/idxd.h 00:02:44.986 TEST_HEADER include/spdk/idxd_spec.h 00:02:44.986 TEST_HEADER include/spdk/init.h 00:02:44.986 TEST_HEADER include/spdk/ioat.h 00:02:44.986 TEST_HEADER include/spdk/ioat_spec.h 00:02:44.986 TEST_HEADER include/spdk/iscsi_spec.h 00:02:44.986 TEST_HEADER include/spdk/json.h 00:02:44.986 TEST_HEADER include/spdk/jsonrpc.h 00:02:44.986 TEST_HEADER include/spdk/keyring.h 00:02:44.986 TEST_HEADER include/spdk/keyring_module.h 00:02:44.986 TEST_HEADER include/spdk/likely.h 00:02:44.986 TEST_HEADER include/spdk/log.h 00:02:44.986 TEST_HEADER include/spdk/lvol.h 00:02:44.986 TEST_HEADER include/spdk/memory.h 00:02:44.986 TEST_HEADER include/spdk/mmio.h 00:02:44.986 TEST_HEADER include/spdk/nbd.h 00:02:44.986 TEST_HEADER include/spdk/notify.h 00:02:44.986 TEST_HEADER include/spdk/nvme.h 00:02:44.986 TEST_HEADER include/spdk/nvme_intel.h 00:02:44.986 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:44.986 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:44.986 TEST_HEADER include/spdk/nvme_spec.h 00:02:44.986 TEST_HEADER include/spdk/nvme_zns.h 00:02:44.986 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:44.986 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:44.986 TEST_HEADER include/spdk/nvmf.h 00:02:44.986 TEST_HEADER include/spdk/nvmf_spec.h 00:02:44.986 TEST_HEADER include/spdk/nvmf_transport.h 00:02:44.986 TEST_HEADER include/spdk/opal.h 00:02:44.986 TEST_HEADER include/spdk/opal_spec.h 00:02:44.986 TEST_HEADER include/spdk/pci_ids.h 00:02:44.986 TEST_HEADER include/spdk/pipe.h 00:02:44.986 TEST_HEADER include/spdk/queue.h 00:02:44.986 TEST_HEADER include/spdk/reduce.h 00:02:44.986 TEST_HEADER include/spdk/rpc.h 00:02:44.986 TEST_HEADER include/spdk/scheduler.h 00:02:44.986 TEST_HEADER include/spdk/scsi.h 00:02:44.986 CC test/rpc_client/rpc_client_test.o 00:02:44.986 TEST_HEADER include/spdk/scsi_spec.h 00:02:44.986 TEST_HEADER include/spdk/sock.h 00:02:44.986 TEST_HEADER include/spdk/stdinc.h 00:02:44.986 CC test/event/event_perf/event_perf.o 00:02:44.986 TEST_HEADER include/spdk/string.h 00:02:44.986 TEST_HEADER include/spdk/thread.h 00:02:44.986 TEST_HEADER include/spdk/trace.h 00:02:44.986 TEST_HEADER include/spdk/trace_parser.h 00:02:44.986 CC test/event/reactor/reactor.o 00:02:44.986 TEST_HEADER include/spdk/tree.h 00:02:44.986 TEST_HEADER include/spdk/ublk.h 00:02:44.986 TEST_HEADER include/spdk/util.h 00:02:44.986 TEST_HEADER include/spdk/uuid.h 00:02:44.986 TEST_HEADER include/spdk/version.h 00:02:44.986 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:44.986 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:44.986 TEST_HEADER include/spdk/vhost.h 00:02:44.986 TEST_HEADER include/spdk/vmd.h 00:02:44.986 TEST_HEADER include/spdk/xor.h 00:02:44.986 TEST_HEADER include/spdk/zipf.h 00:02:44.986 CXX test/cpp_headers/accel.o 00:02:45.244 CC test/nvme/aer/aer.o 00:02:45.244 CC test/env/mem_callbacks/mem_callbacks.o 00:02:45.244 LINK event_perf 00:02:45.244 LINK reactor 00:02:45.244 LINK vhost_fuzz 00:02:45.244 CC examples/nvme/reconnect/reconnect.o 00:02:45.244 CC examples/nvme/hello_world/hello_world.o 00:02:45.244 CXX test/cpp_headers/accel_module.o 00:02:45.244 LINK rpc_client_test 00:02:45.501 CXX test/cpp_headers/assert.o 00:02:45.501 LINK aer 00:02:45.501 LINK hello_world 00:02:45.501 CC test/event/reactor_perf/reactor_perf.o 00:02:45.501 CC test/event/app_repeat/app_repeat.o 00:02:45.501 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:45.501 CXX test/cpp_headers/barrier.o 00:02:45.501 LINK reconnect 00:02:45.759 
LINK reactor_perf 00:02:45.759 CXX test/cpp_headers/base64.o 00:02:45.759 CC test/accel/dif/dif.o 00:02:45.759 LINK app_repeat 00:02:45.759 CC test/nvme/reset/reset.o 00:02:45.759 LINK mem_callbacks 00:02:45.759 CXX test/cpp_headers/bdev.o 00:02:45.759 CC test/nvme/sgl/sgl.o 00:02:46.017 CC test/nvme/overhead/overhead.o 00:02:46.017 CC test/nvme/e2edp/nvme_dp.o 00:02:46.017 CC test/env/vtophys/vtophys.o 00:02:46.017 CC test/event/scheduler/scheduler.o 00:02:46.017 LINK reset 00:02:46.017 LINK nvme_manage 00:02:46.017 CXX test/cpp_headers/bdev_module.o 00:02:46.017 LINK vtophys 00:02:46.017 LINK iscsi_fuzz 00:02:46.017 LINK dif 00:02:46.017 LINK sgl 00:02:46.274 LINK overhead 00:02:46.274 LINK nvme_dp 00:02:46.274 LINK scheduler 00:02:46.274 CXX test/cpp_headers/bdev_zone.o 00:02:46.274 CC test/nvme/err_injection/err_injection.o 00:02:46.274 CC examples/nvme/arbitration/arbitration.o 00:02:46.531 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.532 CC test/nvme/startup/startup.o 00:02:46.532 CC test/nvme/reserve/reserve.o 00:02:46.532 CC test/nvme/simple_copy/simple_copy.o 00:02:46.532 CXX test/cpp_headers/bit_array.o 00:02:46.532 LINK err_injection 00:02:46.532 CC test/nvme/connect_stress/connect_stress.o 00:02:46.532 LINK env_dpdk_post_init 00:02:46.532 CC test/blobfs/mkfs/mkfs.o 00:02:46.789 LINK startup 00:02:46.789 CC test/lvol/esnap/esnap.o 00:02:46.789 LINK reserve 00:02:46.789 LINK arbitration 00:02:46.789 CXX test/cpp_headers/bit_pool.o 00:02:46.789 LINK connect_stress 00:02:46.789 LINK simple_copy 00:02:46.789 LINK mkfs 00:02:46.789 CC test/env/memory/memory_ut.o 00:02:47.046 CXX test/cpp_headers/blob_bdev.o 00:02:47.046 CC test/nvme/boot_partition/boot_partition.o 00:02:47.046 CC test/nvme/compliance/nvme_compliance.o 00:02:47.046 CC examples/nvme/hotplug/hotplug.o 00:02:47.046 CC test/bdev/bdevio/bdevio.o 00:02:47.046 CC test/nvme/fused_ordering/fused_ordering.o 00:02:47.046 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:47.046 CXX test/cpp_headers/blobfs_bdev.o 00:02:47.046 CXX test/cpp_headers/blobfs.o 00:02:47.046 LINK boot_partition 00:02:47.304 LINK hotplug 00:02:47.304 LINK doorbell_aers 00:02:47.304 CXX test/cpp_headers/blob.o 00:02:47.304 LINK fused_ordering 00:02:47.304 CXX test/cpp_headers/conf.o 00:02:47.304 LINK nvme_compliance 00:02:47.304 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.563 LINK bdevio 00:02:47.563 CXX test/cpp_headers/config.o 00:02:47.563 CXX test/cpp_headers/cpuset.o 00:02:47.563 CXX test/cpp_headers/crc16.o 00:02:47.563 CXX test/cpp_headers/crc32.o 00:02:47.563 CXX test/cpp_headers/crc64.o 00:02:47.563 CXX test/cpp_headers/dif.o 00:02:47.563 CC test/nvme/fdp/fdp.o 00:02:47.563 LINK cmb_copy 00:02:47.563 CXX test/cpp_headers/dma.o 00:02:47.563 CXX test/cpp_headers/endian.o 00:02:47.563 CXX test/cpp_headers/env_dpdk.o 00:02:47.563 CXX test/cpp_headers/env.o 00:02:47.836 CC test/env/pci/pci_ut.o 00:02:47.836 CC test/nvme/cuse/cuse.o 00:02:47.836 CXX test/cpp_headers/event.o 00:02:47.836 CC examples/nvme/abort/abort.o 00:02:47.836 LINK fdp 00:02:48.108 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:48.108 CC examples/accel/perf/accel_perf.o 00:02:48.108 CXX test/cpp_headers/fd_group.o 00:02:48.108 CC examples/blob/hello_world/hello_blob.o 00:02:48.108 LINK memory_ut 00:02:48.108 LINK pci_ut 00:02:48.108 LINK pmr_persistence 00:02:48.108 CXX test/cpp_headers/fd.o 00:02:48.381 CXX test/cpp_headers/file.o 00:02:48.381 LINK abort 00:02:48.381 LINK hello_blob 00:02:48.381 CXX test/cpp_headers/ftl.o 00:02:48.381 CXX 
test/cpp_headers/gpt_spec.o 00:02:48.381 CXX test/cpp_headers/hexlify.o 00:02:48.381 CXX test/cpp_headers/histogram_data.o 00:02:48.381 CXX test/cpp_headers/idxd.o 00:02:48.638 CC examples/blob/cli/blobcli.o 00:02:48.638 LINK accel_perf 00:02:48.638 CXX test/cpp_headers/idxd_spec.o 00:02:48.638 CXX test/cpp_headers/init.o 00:02:48.638 CXX test/cpp_headers/ioat.o 00:02:48.638 CXX test/cpp_headers/ioat_spec.o 00:02:48.638 CXX test/cpp_headers/iscsi_spec.o 00:02:48.638 CXX test/cpp_headers/json.o 00:02:48.638 CXX test/cpp_headers/jsonrpc.o 00:02:48.638 CXX test/cpp_headers/keyring.o 00:02:48.638 CXX test/cpp_headers/keyring_module.o 00:02:48.896 CXX test/cpp_headers/likely.o 00:02:48.896 CXX test/cpp_headers/log.o 00:02:48.896 CXX test/cpp_headers/lvol.o 00:02:48.896 CXX test/cpp_headers/memory.o 00:02:48.896 CXX test/cpp_headers/mmio.o 00:02:48.896 CXX test/cpp_headers/nbd.o 00:02:48.896 CXX test/cpp_headers/notify.o 00:02:48.896 CC examples/bdev/hello_world/hello_bdev.o 00:02:48.896 CXX test/cpp_headers/nvme.o 00:02:48.896 CXX test/cpp_headers/nvme_intel.o 00:02:49.155 CC examples/bdev/bdevperf/bdevperf.o 00:02:49.155 LINK blobcli 00:02:49.155 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.155 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:49.155 LINK cuse 00:02:49.155 CXX test/cpp_headers/nvme_spec.o 00:02:49.155 CXX test/cpp_headers/nvme_zns.o 00:02:49.155 CXX test/cpp_headers/nvmf_cmd.o 00:02:49.155 LINK hello_bdev 00:02:49.155 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:49.155 CXX test/cpp_headers/nvmf.o 00:02:49.413 CXX test/cpp_headers/nvmf_spec.o 00:02:49.413 CXX test/cpp_headers/nvmf_transport.o 00:02:49.413 CXX test/cpp_headers/opal.o 00:02:49.413 CXX test/cpp_headers/opal_spec.o 00:02:49.413 CXX test/cpp_headers/pci_ids.o 00:02:49.413 CXX test/cpp_headers/pipe.o 00:02:49.413 CXX test/cpp_headers/queue.o 00:02:49.413 CXX test/cpp_headers/reduce.o 00:02:49.413 CXX test/cpp_headers/rpc.o 00:02:49.413 CXX test/cpp_headers/scheduler.o 00:02:49.413 CXX test/cpp_headers/scsi.o 00:02:49.672 CXX test/cpp_headers/scsi_spec.o 00:02:49.672 CXX test/cpp_headers/sock.o 00:02:49.672 CXX test/cpp_headers/stdinc.o 00:02:49.672 CXX test/cpp_headers/string.o 00:02:49.672 CXX test/cpp_headers/thread.o 00:02:49.672 CXX test/cpp_headers/trace.o 00:02:49.672 CXX test/cpp_headers/trace_parser.o 00:02:49.672 CXX test/cpp_headers/tree.o 00:02:49.672 CXX test/cpp_headers/ublk.o 00:02:49.672 CXX test/cpp_headers/util.o 00:02:49.672 CXX test/cpp_headers/uuid.o 00:02:49.672 CXX test/cpp_headers/version.o 00:02:49.929 CXX test/cpp_headers/vfio_user_pci.o 00:02:49.929 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.929 CXX test/cpp_headers/vhost.o 00:02:49.929 CXX test/cpp_headers/vmd.o 00:02:49.929 CXX test/cpp_headers/xor.o 00:02:49.929 LINK bdevperf 00:02:49.929 CXX test/cpp_headers/zipf.o 00:02:50.494 CC examples/nvmf/nvmf/nvmf.o 00:02:50.751 LINK nvmf 00:02:52.124 LINK esnap 00:02:52.383 00:02:52.383 real 1m3.752s 00:02:52.383 user 6m27.707s 00:02:52.383 sys 1m32.485s 00:02:52.383 11:26:55 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:52.383 11:26:55 make -- common/autotest_common.sh@10 -- $ set +x 00:02:52.383 ************************************ 00:02:52.383 END TEST make 00:02:52.383 ************************************ 00:02:52.383 11:26:55 -- common/autotest_common.sh@1142 -- $ return 0 00:02:52.383 11:26:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:52.383 11:26:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:52.383 11:26:55 -- pm/common@40 -- $ local monitor pid 
pids signal=TERM 00:02:52.383 11:26:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.383 11:26:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:52.383 11:26:55 -- pm/common@44 -- $ pid=5145 00:02:52.383 11:26:55 -- pm/common@50 -- $ kill -TERM 5145 00:02:52.383 11:26:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.383 11:26:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:52.383 11:26:55 -- pm/common@44 -- $ pid=5147 00:02:52.383 11:26:55 -- pm/common@50 -- $ kill -TERM 5147 00:02:52.640 11:26:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:52.640 11:26:55 -- nvmf/common.sh@7 -- # uname -s 00:02:52.640 11:26:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.640 11:26:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.640 11:26:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.640 11:26:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.640 11:26:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:52.640 11:26:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.640 11:26:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.640 11:26:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.640 11:26:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.640 11:26:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:52.640 11:26:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:02:52.640 11:26:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:02:52.640 11:26:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:52.640 11:26:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:52.640 11:26:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:52.640 11:26:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:52.640 11:26:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:52.640 11:26:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:52.641 11:26:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.641 11:26:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.641 11:26:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.641 11:26:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.641 11:26:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.641 11:26:55 -- paths/export.sh@5 -- # export PATH 00:02:52.641 11:26:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.641 11:26:55 -- nvmf/common.sh@47 -- # : 0 00:02:52.641 11:26:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:52.641 11:26:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:52.641 11:26:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:52.641 11:26:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:52.641 11:26:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:52.641 11:26:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:52.641 11:26:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:52.641 11:26:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:52.641 11:26:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:52.641 11:26:55 -- spdk/autotest.sh@32 -- # uname -s 00:02:52.641 11:26:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:52.641 11:26:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:52.641 11:26:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:52.641 11:26:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:52.641 11:26:55 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:52.641 11:26:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:52.641 11:26:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:52.641 11:26:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:52.641 11:26:55 -- spdk/autotest.sh@48 -- # udevadm_pid=52761 00:02:52.641 11:26:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:52.641 11:26:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:52.641 11:26:55 -- pm/common@17 -- # local monitor 00:02:52.641 11:26:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.641 11:26:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.641 11:26:55 -- pm/common@21 -- # date +%s 00:02:52.641 11:26:55 -- pm/common@25 -- # sleep 1 00:02:52.641 11:26:55 -- pm/common@21 -- # date +%s 00:02:52.641 11:26:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720783615 00:02:52.641 11:26:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720783615 00:02:52.641 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720783615_collect-cpu-load.pm.log 00:02:52.641 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720783615_collect-vmstat.pm.log 00:02:53.575 11:26:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.575 11:26:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:53.575 11:26:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:53.575 11:26:56 -- common/autotest_common.sh@10 -- # set +x 00:02:53.575 11:26:56 -- spdk/autotest.sh@59 -- # create_test_list 00:02:53.575 11:26:56 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:53.575 11:26:56 -- common/autotest_common.sh@10 -- # set +x 00:02:53.575 11:26:56 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:53.575 11:26:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:53.575 11:26:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:53.575 11:26:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:53.575 11:26:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:53.575 11:26:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:53.575 11:26:56 -- common/autotest_common.sh@1455 -- # uname 00:02:53.575 11:26:57 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:53.575 11:26:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:53.575 11:26:57 -- common/autotest_common.sh@1475 -- # uname 00:02:53.575 11:26:57 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:53.575 11:26:57 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:53.575 11:26:57 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:53.575 11:26:57 -- spdk/autotest.sh@72 -- # hash lcov 00:02:53.575 11:26:57 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:53.575 11:26:57 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:53.575 --rc lcov_branch_coverage=1 00:02:53.575 --rc lcov_function_coverage=1 00:02:53.575 --rc genhtml_branch_coverage=1 00:02:53.575 --rc genhtml_function_coverage=1 00:02:53.575 --rc genhtml_legend=1 00:02:53.575 --rc geninfo_all_blocks=1 00:02:53.575 ' 00:02:53.575 11:26:57 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:53.575 --rc lcov_branch_coverage=1 00:02:53.575 --rc lcov_function_coverage=1 00:02:53.575 --rc genhtml_branch_coverage=1 00:02:53.575 --rc genhtml_function_coverage=1 00:02:53.575 --rc genhtml_legend=1 00:02:53.575 --rc geninfo_all_blocks=1 00:02:53.575 ' 00:02:53.575 11:26:57 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:53.575 --rc lcov_branch_coverage=1 00:02:53.575 --rc lcov_function_coverage=1 00:02:53.575 --rc genhtml_branch_coverage=1 00:02:53.575 --rc genhtml_function_coverage=1 00:02:53.575 --rc genhtml_legend=1 00:02:53.575 --rc geninfo_all_blocks=1 00:02:53.575 --no-external' 00:02:53.575 11:26:57 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:53.575 --rc lcov_branch_coverage=1 00:02:53.575 --rc lcov_function_coverage=1 00:02:53.575 --rc genhtml_branch_coverage=1 00:02:53.575 --rc genhtml_function_coverage=1 00:02:53.575 --rc genhtml_legend=1 00:02:53.575 --rc geninfo_all_blocks=1 00:02:53.575 --no-external' 00:02:53.575 11:26:57 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:53.833 lcov: LCOV version 1.14 00:02:53.833 11:26:57 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:12.001 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:12.001 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:21.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:21.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
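For context on the coverage setup traced just above: autotest exports branch and function coverage through LCOV_OPTS and then captures an initial "Baseline" tracefile before any test runs, so later results can be merged against a known-zero starting point. A minimal sketch of that flow with standard lcov options; the second capture, the merge, and the genhtml step are the usual follow-up and are assumed here rather than taken from this part of the log:

    # Zero-coverage baseline right after the build (-c capture, -i initial).
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         --no-external -q -c -i -d /home/vagrant/spdk_repo/spdk \
         -t Baseline -o cov_base.info
    # ... run the test suite, which writes .gcda files ...
    # Capture the coverage the tests produced (assumed step).
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         --no-external -q -c -d /home/vagrant/spdk_repo/spdk \
         -t Tests -o cov_test.info
    # Merge baseline and test data, then render an HTML report (assumed step).
    lcov -a cov_base.info -a cov_test.info -o cov_total.info
    genhtml cov_total.info -o coverage_html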
00:03:21.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:21.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:21.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:21.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:21.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:21.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:21.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:21.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:21.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:21.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:22.231 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:22.231 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:22.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:22.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:22.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:22.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
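These "no functions found" warnings are expected here: the cpp_headers objects built earlier exist only to prove that each public SPDK header compiles standalone, so their .gcno files typically contain no executable functions for geninfo to record. A rough, hypothetical equivalent for a single header (the file names are illustrative, not taken from the test scripts):

    # Compile one public header in isolation with coverage enabled; the
    # emitted hdr_check.gcno carries no function records, which is what
    # triggers the geninfo warnings above.
    printf '#include "spdk/nvme.h"\n' > hdr_check.cpp
    g++ --coverage -I /home/vagrant/spdk_repo/spdk/include -c hdr_check.cpp -o hdr_check.o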
00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:22.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:22.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:22.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:22.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:26.699 11:27:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:26.699 11:27:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:26.699 11:27:29 -- common/autotest_common.sh@10 -- # set +x 00:03:26.699 11:27:29 -- spdk/autotest.sh@91 -- # rm -f 00:03:26.699 11:27:29 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:26.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:27.214 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:27.214 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:27.214 11:27:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:27.214 11:27:30 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:27.214 11:27:30 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:27.214 11:27:30 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:27.214 11:27:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.214 11:27:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:27.214 11:27:30 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:27.214 11:27:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.214 11:27:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.214 11:27:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.214 11:27:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:27.214 11:27:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:27.214 11:27:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:27.214 11:27:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.214 11:27:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.214 11:27:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:27.214 11:27:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:27.214 11:27:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:27.214 11:27:30 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.214 11:27:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.214 11:27:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:27.215 11:27:30 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:27.215 11:27:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:27.215 11:27:30 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.215 11:27:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:27.215 11:27:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.215 11:27:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.215 11:27:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:27.215 11:27:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:27.215 11:27:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:27.215 No valid GPT data, bailing 00:03:27.215 11:27:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.215 11:27:30 -- scripts/common.sh@391 -- # pt= 00:03:27.215 11:27:30 -- scripts/common.sh@392 -- # return 1 00:03:27.215 11:27:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:27.215 1+0 records in 00:03:27.215 1+0 records out 00:03:27.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504177 s, 208 MB/s 00:03:27.215 11:27:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.215 11:27:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.215 11:27:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:27.215 11:27:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:27.215 11:27:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:27.215 No valid GPT data, bailing 00:03:27.215 11:27:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:27.215 11:27:30 -- scripts/common.sh@391 -- # pt= 00:03:27.215 11:27:30 -- scripts/common.sh@392 -- # return 1 00:03:27.215 11:27:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:27.473 1+0 records in 00:03:27.473 1+0 records out 00:03:27.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481572 s, 218 MB/s 00:03:27.473 11:27:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.473 11:27:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.473 11:27:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:27.473 11:27:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:27.473 11:27:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:27.473 No valid GPT data, bailing 00:03:27.473 11:27:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:27.473 11:27:30 -- scripts/common.sh@391 -- # pt= 00:03:27.473 11:27:30 -- scripts/common.sh@392 -- # return 1 00:03:27.473 11:27:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:27.473 1+0 records in 00:03:27.473 1+0 records out 00:03:27.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00371042 s, 283 MB/s 00:03:27.473 11:27:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.473 11:27:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.473 11:27:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:27.473 11:27:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:27.473 11:27:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:27.473 No valid GPT data, bailing 00:03:27.473 11:27:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:27.473 11:27:30 -- scripts/common.sh@391 -- # pt= 00:03:27.473 11:27:30 -- scripts/common.sh@392 -- # return 1 00:03:27.473 11:27:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
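The device loop above only zero-fills a namespace after both partition-table probes come back empty: spdk-gpt.py reports "No valid GPT data, bailing" and blkid returns an empty PTTYPE, and only then does the dd wipe run. A small stand-alone sketch of that decision, using blkid alone in place of the spdk-gpt.py helper (the device path is just an example):

    dev=/dev/nvme1n3   # example namespace from the log above
    # An empty PTTYPE from blkid means no partition table was detected.
    if [[ -z "$(blkid -s PTTYPE -o value "$dev")" ]]; then
        # Device looks unused: clear the first MiB so stale metadata
        # cannot leak into the upcoming tests.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi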
00:03:27.473 1+0 records in 00:03:27.473 1+0 records out 00:03:27.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00359994 s, 291 MB/s 00:03:27.473 11:27:30 -- spdk/autotest.sh@118 -- # sync 00:03:27.473 11:27:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.473 11:27:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.473 11:27:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:29.388 11:27:32 -- spdk/autotest.sh@124 -- # uname -s 00:03:29.388 11:27:32 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:29.388 11:27:32 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:29.388 11:27:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.388 11:27:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.388 11:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:29.388 ************************************ 00:03:29.388 START TEST setup.sh 00:03:29.388 ************************************ 00:03:29.388 11:27:32 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:29.388 * Looking for test storage... 00:03:29.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.388 11:27:32 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:29.388 11:27:32 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:29.388 11:27:32 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:29.388 11:27:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.388 11:27:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.388 11:27:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:29.388 ************************************ 00:03:29.388 START TEST acl 00:03:29.388 ************************************ 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:29.388 * Looking for test storage... 
00:03:29.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.388 11:27:32 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:29.388 11:27:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:29.388 11:27:32 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:29.388 11:27:32 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:29.388 11:27:32 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:29.388 11:27:32 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:29.388 11:27:32 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:29.388 11:27:32 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.388 11:27:32 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.321 11:27:33 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:30.321 11:27:33 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:30.321 11:27:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.321 11:27:33 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:30.321 11:27:33 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.321 11:27:33 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:30.921 11:27:34 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.921 Hugepages 00:03:30.921 node hugesize free / total 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.921 00:03:30.921 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:30.921 11:27:34 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:30.921 11:27:34 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.921 11:27:34 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.921 11:27:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.921 ************************************ 00:03:30.921 START TEST denied 00:03:30.921 ************************************ 00:03:30.921 11:27:34 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:30.921 11:27:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:30.921 11:27:34 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:30.921 11:27:34 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:30.921 11:27:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.921 11:27:34 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:31.854 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.854 11:27:35 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:32.420 00:03:32.420 real 0m1.317s 00:03:32.420 user 0m0.517s 00:03:32.420 sys 0m0.742s 00:03:32.420 11:27:35 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.420 ************************************ 00:03:32.420 END TEST denied 00:03:32.420 11:27:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:32.420 ************************************ 00:03:32.420 11:27:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:32.420 11:27:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:32.420 11:27:35 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.420 11:27:35 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.420 11:27:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:32.420 ************************************ 00:03:32.420 START TEST allowed 00:03:32.420 ************************************ 00:03:32.420 11:27:35 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:32.420 11:27:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:32.420 11:27:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:32.420 11:27:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:32.420 11:27:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.420 11:27:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:32.986 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:32.986 11:27:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.987 11:27:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:33.923 00:03:33.923 real 0m1.453s 00:03:33.923 user 0m0.615s 00:03:33.923 sys 0m0.826s 00:03:33.923 11:27:37 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:33.923 11:27:37 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:33.923 ************************************ 00:03:33.923 END TEST allowed 00:03:33.923 ************************************ 00:03:33.923 11:27:37 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:33.923 ************************************ 00:03:33.923 END TEST acl 00:03:33.923 ************************************ 00:03:33.923 00:03:33.923 real 0m4.535s 00:03:33.923 user 0m2.004s 00:03:33.923 sys 0m2.470s 00:03:33.923 11:27:37 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.923 11:27:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.923 11:27:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:33.923 11:27:37 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:33.923 11:27:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.923 11:27:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.923 11:27:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.923 ************************************ 00:03:33.923 START TEST hugepages 00:03:33.923 ************************************ 00:03:33.923 11:27:37 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:33.923 * Looking for test storage... 00:03:33.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.923 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 6027872 kB' 'MemAvailable: 7408104 kB' 'Buffers: 2436 kB' 'Cached: 1594472 kB' 'SwapCached: 0 kB' 'Active: 436952 kB' 'Inactive: 1265576 kB' 'Active(anon): 116108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 107260 kB' 'Mapped: 48820 kB' 'Shmem: 10488 kB' 'KReclaimable: 61500 kB' 'Slab: 133136 kB' 'SReclaimable: 61500 kB' 'SUnreclaim: 71636 kB' 'KernelStack: 6364 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 338584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.924 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.925 11:27:37 
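For reference, the xtrace above (the Hugepagesize scan, the per-node pool reset that follows, and the 1024-page pool the default_setup test requests next) reduces to a few lines of shell. A minimal sketch: the helper name get_meminfo, the ': ' field split, and the sysfs paths are taken from the trace; everything else is an illustrative stand-in, not the literal setup/common.sh code.

#!/usr/bin/env bash
# Minimal sketch of the steps traced above (run as root for the sysfs writes).

# 1. Field lookup in /proc/meminfo, as in setup/common.sh: split each line on
#    ': ', skip fields until the requested one matches, print its value.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

default_hugepages=$(get_meminfo Hugepagesize)     # -> 2048 (kB) on this VM

# 2. clear_hp: zero every per-node hugepage pool so the test starts clean.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes

# 3. Pool size for default_setup: 2097152 kB (2 GiB) of 2048 kB pages.
nr_hugepages=$(( 2097152 / default_hugepages ))   # -> 1024 pages on node 0
echo "nr_hugepages=$nr_hugepages"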
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:33.925 11:27:37 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:33.925 11:27:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.925 11:27:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.925 11:27:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.184 ************************************ 00:03:34.184 START TEST default_setup 00:03:34.184 ************************************ 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.184 11:27:37 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.751 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:34.751 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8118684 kB' 'MemAvailable: 9498744 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 453164 kB' 'Inactive: 1265576 kB' 'Active(anon): 132320 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123428 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132760 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71600 kB' 'KernelStack: 6288 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:34.751 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.752 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
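After the setup.sh run, verify_nr_hugepages starts by checking the transparent-hugepage toggle and reading AnonHugePages (anon=0 above). A standalone sketch of that gate follows; the THP sysfs path is the standard kernel location and is an assumption here, since the trace only shows the resulting string test, and the awk lookup stands in for the get_meminfo call.

# Sketch of the anon gate in verify_nr_hugepages: only count AnonHugePages
# when transparent hugepages are not globally set to [never].
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this VM
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # -> 0 in this run
else
    anon=0
fi
echo "anon=$anon"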
# mem=("${mem[@]#Node +([0-9]) }") 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8120264 kB' 'MemAvailable: 9500332 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 452640 kB' 'Inactive: 1265584 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122912 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132760 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71600 kB' 'KernelStack: 6320 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.016 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8121752 kB' 'MemAvailable: 9501820 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 452888 kB' 'Inactive: 1265584 kB' 'Active(anon): 132044 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123176 kB' 'Mapped: 
48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132760 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71600 kB' 'KernelStack: 6320 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.018 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 
11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.019 nr_hugepages=1024 00:03:35.019 resv_hugepages=0 00:03:35.019 surplus_hugepages=0 00:03:35.019 anon_hugepages=0 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.019 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8122272 kB' 'MemAvailable: 9502340 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 452924 kB' 'Inactive: 1265584 kB' 'Active(anon): 132080 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123160 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132760 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71600 kB' 'KernelStack: 6304 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.020 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 
11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.021 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8122452 kB' 'MemUsed: 4119512 kB' 'SwapCached: 0 kB' 'Active: 452792 kB' 'Inactive: 1265584 kB' 'Active(anon): 131948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1596896 kB' 'Mapped: 48560 kB' 'AnonPages: 123060 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61160 kB' 'Slab: 132756 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.022 11:27:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:35.022 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[the same setup/common.sh@32 test, @32 continue, @31 IFS=': ', @31 read -r var val _ records repeat for each remaining /proc/meminfo field until HugePages_Surp is reached]
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
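The read loop traced above is the get_meminfo helper in setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key (HugePages_Surp here) and echoing its value, 0 on this run. A minimal sketch of that lookup pattern, using the hypothetical name lookup_meminfo and simplified error handling rather than the exact common.sh implementation:

    #!/usr/bin/env bash
    # Simplified stand-in for the get_meminfo helper traced above (not the SPDK script itself).
    # Echoes the value of one /proc/meminfo field, e.g. "HugePages_Surp".
    shopt -s extglob

    lookup_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read the node-specific meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on per-node files
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    lookup_meminfo HugePages_Surp    # prints 0 on the run captured above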
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:35.023 node0=1024 expecting 1024
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:35.023
00:03:35.023 ************************************
00:03:35.023 END TEST default_setup
00:03:35.023 ************************************
00:03:35.023 real 0m0.964s
00:03:35.023 user 0m0.428s
00:03:35.023 sys 0m0.461s
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:35.023 11:27:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:35.023 11:27:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:35.023 11:27:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:35.023 11:27:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:35.023 11:27:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:35.023 11:27:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:35.023 ************************************
00:03:35.023 START TEST per_node_1G_alloc
00:03:35.023 ************************************
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:35.023 11:27:38
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.023 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.282 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.282 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.283 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9171404 kB' 'MemAvailable: 10551476 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 452948 kB' 'Inactive: 1265588 kB' 'Active(anon): 132104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123212 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132732 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71572 kB' 'KernelStack: 6324 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.547 11:27:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.547 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[the same @32 test, @32 continue, @31 IFS=': ', @31 read -r var val _ records repeat for each following /proc/meminfo field through VmallocChunk]
00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:35.548 11:27:38
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.548 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9171404 kB' 'MemAvailable: 10551476 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 452860 kB' 'Inactive: 1265588 kB' 'Active(anon): 132016 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123124 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6304 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[the same @32 test, @32 continue, @31 IFS=': ', @31 read -r var val _ records repeat for MemFree, MemAvailable, Buffers, Cached, SwapCached and Active]
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
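The meminfo snapshot a few lines up ties the earlier get_test_nr_hugepages 1048576 0 call to what the kernel reports: 1 GiB at the default 2 MiB hugepage size comes out to 512 pages, and Hugetlb (512 x 2048 kB = 1048576 kB) matches the reservation. A hedged sketch of that arithmetic; requested_kb and default_hugepage_kb are illustrative names, not variables from hugepages.sh:

    #!/usr/bin/env bash
    # Illustrative arithmetic only; it mirrors the numbers in the trace, not the script's code.
    requested_kb=1048576                                                     # 1 GiB, as passed to get_test_nr_hugepages
    default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on the run above
    nr_hugepages=$(( requested_kb / default_hugepage_kb ))                   # 1048576 / 2048 = 512, matching nr_hugepages=512

    echo "nr_hugepages=$nr_hugepages"
    # Cross-check against the kernel's own accounting (512 and 1048576 kB on this run):
    awk '/^HugePages_Total|^Hugetlb/' /proc/meminfo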
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.549 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[the same @32 test, @32 continue, @31 IFS=': ', @31 read -r var val _ records repeat for each following /proc/meminfo field through CmaFree and Unaccepted]
00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:35.550 11:27:38
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9171404 kB' 'MemAvailable: 10551476 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 452824 kB' 'Inactive: 1265588 kB' 'Active(anon): 131980 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123124 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6304 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.550 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
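[editor's note] The run of "-- # continue" entries above and below is the get_meminfo helper from setup/common.sh walking /proc/meminfo one key at a time until it reaches the key it was asked for: the HugePages_Surp lookup just returned 0 (surp=0), and the scan for HugePages_Rsvd is in progress. A minimal reconstruction of what the trace implies is sketched below; the function shape and variable names follow the trace, but the real setup/common.sh may differ in detail.

#!/usr/bin/env bash
# Sketch of get_meminfo as implied by the setup/common.sh@17-33 trace.
shopt -s extglob # needed for the +([0-9]) pattern used below

get_meminfo() {
	local get=$1 node=$2 # e.g. get_meminfo HugePages_Rsvd, or HugePages_Surp 0
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, read the per-node file instead; its lines carry a
	# "Node <n> " prefix, which is stripped off after mapfile.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] \
		&& mem_f=/sys/devices/system/node/node$node/meminfo

	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan key by key: every non-matching key shows up as "continue" in the
	# trace; the matching one echoes its value and returns 0.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Rsvd # -> 0 on this box, per the trace
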
00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.551 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 
11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.552 nr_hugepages=512 00:03:35.552 resv_hugepages=0 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.552 surplus_hugepages=0 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.552 anon_hugepages=0 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9171924 kB' 'MemAvailable: 10551996 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 453104 kB' 'Inactive: 1265588 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 
kB' 'Writeback: 0 kB' 'AnonPages: 123716 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132720 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71560 kB' 'KernelStack: 6336 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.552 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
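[editor's note] For context on the numbers in the meminfo snapshots above: HugePages_Total: 512 at Hugepagesize: 2048 kB is 512 x 2048 kB = 1048576 kB, i.e. the 1 GiB that per_node_1G_alloc is meant to allocate, matching the Hugetlb: 1048576 kB line. The hugepages.sh checks traced at @99-@110 boil down to comparing that kernel-reported total against the requested page count plus surplus and reserved pages; the literal 512s inside the traced (( ... )) tests are those lookups already expanded. Roughly, reusing the get_meminfo sketch above (names here are illustrative, not the script's exact ones):

# Rough shape of the global hugepage verification (hugepages.sh@99-110).
verify_global_hugepages() {
	local nr_hugepages=$1 # requested pages, 512 in this run
	local surp resv total

	surp=$(get_meminfo HugePages_Surp)   # -> 0
	resv=$(get_meminfo HugePages_Rsvd)   # -> 0
	total=$(get_meminfo HugePages_Total) # -> 512

	echo "nr_hugepages=$nr_hugepages"
	echo "resv_hugepages=$resv"
	echo "surplus_hugepages=$surp"

	# Everything the kernel reports must be accounted for by the request:
	# 512 == 512 + 0 + 0, with nothing left over as surplus or reserved.
	((total == nr_hugepages + surp + resv)) && ((total == nr_hugepages))
}

verify_global_hugepages 512 && echo "global hugepage count OK"
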
00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 
11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.553 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9172056 kB' 'MemUsed: 3069908 kB' 'SwapCached: 0 kB' 'Active: 452852 kB' 'Inactive: 1265592 kB' 'Active(anon): 132008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1596900 kB' 'Mapped: 48560 kB' 'AnonPages: 123156 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 61160 kB' 'Slab: 132712 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.554 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.555 node0=512 expecting 512 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:35.555 00:03:35.555 real 0m0.515s 00:03:35.555 user 0m0.251s 00:03:35.555 sys 0m0.268s 00:03:35.555 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.556 11:27:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.556 ************************************ 00:03:35.556 END TEST per_node_1G_alloc 00:03:35.556 ************************************ 00:03:35.556 11:27:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.556 11:27:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:35.556 11:27:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.556 11:27:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.556 11:27:38 
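A note for anyone reading this stretch of the log: the long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by 'continue' are setup/common.sh's get_meminfo helper walking a captured meminfo snapshot one field at a time until it reaches the requested key (HugePages_Surp here, which comes back as 0 just before the 'node0=512 expecting 512' check above). Below is a minimal standalone sketch of that pattern, simplified from what the xtrace shows rather than copied from the upstream helper.

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern visible in the trace: split each
# meminfo line with IFS=': ' and print the value of the requested field.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local line var val _

	# When a NUMA node is given and a per-node meminfo exists, read that
	# file instead (the trace tests /sys/devices/system/node/node$node/meminfo).
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node meminfo lines carry a "Node N " prefix; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		# Every non-matching field shows up as a "continue" in the xtrace.
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

get_meminfo HugePages_Surp   # prints 0 on the VM in this log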
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.556 ************************************ 00:03:35.556 START TEST even_2G_alloc 00:03:35.556 ************************************ 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.556 11:27:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.078 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.078 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc 
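The even_2G_alloc trace that begins here reduces to a little arithmetic plus two knobs for scripts/setup.sh: get_test_nr_hugepages turns the 2097152 kB (2 GiB) request into nr_hugepages=1024, consistent with the 2048 kB Hugepagesize in the meminfo snapshots (2097152 / 2048 = 1024), and the allocation is then driven with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes so the pool is populated evenly across NUMA nodes. A hedged sketch of that step, with the path and variable names taken from the log:

# Sketch of the setup step traced above; the numbers mirror the log.
size_kb=2097152                           # the 2 GiB request, in kB
hugepage_kb=2048                          # Hugepagesize reported in meminfo
nr_hugepages=$((size_kb / hugepage_kb))   # 2097152 / 2048 = 1024

# The trace exports both knobs before re-running the repo's setup script so
# the hugepage pool is populated evenly across the available NUMA nodes.
NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes \
	/home/vagrant/spdk_repo/spdk/scripts/setup.sh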
-- setup/hugepages.sh@92 -- # local surp 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8128044 kB' 'MemAvailable: 9508116 kB' 'Buffers: 2436 kB' 'Cached: 1594460 kB' 'SwapCached: 0 kB' 'Active: 452820 kB' 'Inactive: 1265588 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265588 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123424 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6308 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.079 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8128044 kB' 'MemAvailable: 9508120 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452916 kB' 'Inactive: 
1265592 kB' 'Active(anon): 132072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123188 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6360 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.080 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8128044 kB' 'MemAvailable: 9508120 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452808 kB' 'Inactive: 1265592 kB' 'Active(anon): 131964 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123124 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6328 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.081 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc 
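The scans above and below are verify_nr_hugepages gathering counters with the same get_meminfo pattern: AnonHugePages, then HugePages_Surp, then HugePages_Rsvd, before per-node HugePages_Total values are checked against the expected count (the same style of check that printed 'node0=512 expecting 512' for the previous test). A hedged sketch of that bookkeeping, reusing the get_meminfo sketch from earlier; expected_per_node is an illustrative stand-in, not a value taken from the log:

# Hedged sketch of the verification bookkeeping the trace walks through.
anon=$(get_meminfo AnonHugePages)    # 0 kB in the snapshots above
surp=$(get_meminfo HugePages_Surp)   # surplus pages, 0 here
resv=$(get_meminfo HugePages_Rsvd)   # reserved-but-unfaulted pages, 0 here

expected_per_node=1024               # illustrative; the test derives its own value
for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	total=$(get_meminfo HugePages_Total "$node")
	echo "node$node=$total expecting $expected_per_node"
done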
-- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.082 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:36.083 nr_hugepages=1024 00:03:36.083 resv_hugepages=0 00:03:36.083 surplus_hugepages=0 00:03:36.083 anon_hugepages=0 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8128044 kB' 'MemAvailable: 9508120 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452736 kB' 'Inactive: 1265592 kB' 'Active(anon): 131892 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123312 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6280 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.083 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.083 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.084 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.085 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8128044 kB' 'MemUsed: 4113920 kB' 'SwapCached: 0 kB' 'Active: 452824 kB' 'Inactive: 1265592 kB' 'Active(anon): 131980 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1596900 kB' 'Mapped: 48560 kB' 'AnonPages: 123164 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.085 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.086 node0=1024 expecting 1024 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.086 00:03:36.086 real 0m0.497s 00:03:36.086 user 0m0.250s 00:03:36.086 sys 0m0.278s 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.086 11:27:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.086 ************************************ 00:03:36.086 END TEST even_2G_alloc 00:03:36.086 ************************************ 00:03:36.086 11:27:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:36.086 11:27:39 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:36.086 11:27:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.086 11:27:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.086 11:27:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.086 ************************************ 00:03:36.086 START TEST odd_alloc 00:03:36.086 ************************************ 00:03:36.086 11:27:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:36.086 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:36.086 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:36.086 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
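For readers skimming the trace above: both the even_2G_alloc check ("node0=1024 expecting 1024") and the odd_alloc setup that follows (HUGEMEM=2049 turning into nr_hugepages=1025) boil down to reading the HugePages_* counters out of /proc/meminfo (or a node's meminfo) and comparing them against the requested page count. The stand-alone sketch below mirrors that arithmetic only; meminfo_val, hugemem_mb and want_pages are illustrative names, not the actual setup/common.sh or hugepages.sh helpers, and the round-up used to reproduce the 1025 pages reported for odd_alloc is an assumption about how the size gets converted.

#!/usr/bin/env bash
# Minimal sketch of the hugepage accounting this test exercises -- not the
# SPDK setup scripts themselves, just the same /proc/meminfo arithmetic.

# Look up a single key (HugePages_Total, Hugepagesize, ...) either in the
# global /proc/meminfo or in a node's meminfo when a node id is given.
meminfo_val() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix every line with "Node <n> "; strip that, then
    # split on ": " and print the numeric value for the requested key.
    sed 's/^Node [0-9]* //' "$file" |
        awk -F': +' -v k="$key" '$1 == k {print $2 + 0; exit}'
}

# odd_alloc asks for HUGEMEM=2049 MB; with 2048 kB pages, rounding up gives
# the 1025 pages seen in the trace (the exact rounding in hugepages.sh may
# differ -- this is just one way to land on the same number).
hugemem_mb=2049
page_kb=$(meminfo_val Hugepagesize)   # typically 2048 on this VM
: "${page_kb:=2048}"                  # fall back if the key is missing
want_pages=$(( (hugemem_mb * 1024 + page_kb - 1) / page_kb ))

total=$(meminfo_val HugePages_Total)
rsvd=$(meminfo_val HugePages_Rsvd)
surp=$(meminfo_val HugePages_Surp)

# Same consistency check the trace performs for even_2G_alloc: the pool must
# account for the requested pages plus anything reserved or surplus.
if (( total == want_pages + rsvd + surp )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "mismatch: total=$total want=$want_pages rsvd=$rsvd surp=$surp" >&2
fi

Passing a node id to meminfo_val (e.g. "meminfo_val HugePages_Surp 0") reads the per-node counters, which is the same distinction the trace makes between the global check and the node0 check.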
00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.087 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.608 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.608 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.608 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:36.608 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8126584 kB' 'MemAvailable: 9506660 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452844 kB' 'Inactive: 1265592 kB' 'Active(anon): 132000 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123772 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6356 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 
11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.609 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 
11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241964 kB' 'MemFree: 8126332 kB' 'MemAvailable: 9506408 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452600 kB' 'Inactive: 1265592 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122900 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.610 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.611 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 
11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8126332 kB' 'MemAvailable: 9506408 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452812 kB' 'Inactive: 1265592 kB' 'Active(anon): 131968 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123116 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6288 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
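[Editor's note, not part of the captured trace] The long skip/continue runs above are get_meminfo scanning the captured meminfo snapshot key by key (AnonHugePages, then HugePages_Surp, now HugePages_Rsvd), echoing the value once the requested key matches. A simplified sketch of that lookup pattern, assuming plain /proc/meminfo input; get_meminfo_sketch is a hypothetical name, and the real setup/common.sh helper additionally handles per-node /sys/devices/system/node/nodeN/meminfo and strips the "Node N" prefix via mapfile, as the trace shows:

    get_meminfo_sketch() {
        # read "key: value [unit]" lines and print the value for the requested key
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip until e.g. HugePages_Rsvd
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on this VM, as echoed in the trace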
00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.612 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.613 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.614 nr_hugepages=1025 00:03:36.614 resv_hugepages=0 00:03:36.614 surplus_hugepages=0 00:03:36.614 anon_hugepages=0 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8126332 kB' 'MemAvailable: 9506408 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452568 kB' 'Inactive: 1265592 kB' 'Active(anon): 131724 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123132 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.614 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.615 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8126332 kB' 'MemUsed: 4115632 kB' 'SwapCached: 0 kB' 'Active: 452592 kB' 'Inactive: 1265592 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1596900 kB' 'Mapped: 48560 kB' 'AnonPages: 123160 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.616 11:27:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.616 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
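
Note on the long runs of "IFS=': ' / read -r var val _ / continue" records above: they are a single helper in setup/common.sh walking /proc/meminfo (or, when a node is given, /sys/devices/system/node/nodeN/meminfo) one "key: value" line at a time and echoing the value once the requested key matches; every non-matching key shows up as one "continue" record in the xtrace. The sketch below is a reconstruction of that pattern from the trace, not the verbatim upstream script; the function name and exact control flow are illustrative.

get_meminfo_sketch() {                      # usage: get_meminfo_sketch HugePages_Surp [node]
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-node counters if the kernel exposes them.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        [[ $line == Node\ * ]] && line=${line#Node } && line=${line#* }   # drop the "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # each mismatch is one "continue" record in the trace
        echo "$val"                         # e.g. 1025 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done < "$mem_f"
    return 1
}

# The same counters the test is parsing can be read directly from standard kernel sysfs paths:
#   grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo
#   cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages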
00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.617 node0=1025 expecting 1025 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:36.617 00:03:36.617 real 0m0.524s 00:03:36.617 user 0m0.250s 00:03:36.617 sys 0m0.279s 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.617 11:27:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.617 ************************************ 00:03:36.617 END TEST odd_alloc 00:03:36.617 ************************************ 00:03:36.617 11:27:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:36.617 11:27:40 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:36.617 11:27:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.617 11:27:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.617 11:27:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.617 ************************************ 00:03:36.617 START TEST custom_alloc 00:03:36.617 ************************************ 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:36.617 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:36.876 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:36.876 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.876 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:37.139 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.139 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.139 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9173040 kB' 'MemAvailable: 10553116 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452984 kB' 'Inactive: 1265592 kB' 'Active(anon): 132140 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123236 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132768 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71608 kB' 'KernelStack: 6340 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.139 11:27:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.139 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
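
Summary of the accounting being verified at this point, with all numbers taken from the records above: odd_alloc requested nr_hugepages=1025 and verify_nr_hugepages confirmed that HugePages_Total equals nr_hugepages + surplus + reserved and that the single NUMA node holds all of them ("node0=1025 expecting 1025"); custom_alloc then converts a 1048576 kB request into 512 pages at the 2048 kB hugepage size and pins them to node 0 via HUGENODE='nodes_hp[0]=512', and the "always [madvise] never" check at the top of verify_nr_hugepages only counts AnonHugePages while transparent hugepages are not locked to [never]. A minimal arithmetic sketch, with values copied from the trace and illustrative variable names:

nr_hugepages=1025 surp=0 resv=0
(( 1025 == nr_hugepages + surp + resv )) && echo 'odd_alloc counters consistent'
size_kb=1048576 hugepagesize_kb=2048                              # 1 GiB request, 2 MiB hugepages
echo "HUGENODE=nodes_hp[0]=$(( size_kb / hugepagesize_kb ))"      # -> nodes_hp[0]=512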
00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.140 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9173040 kB' 'MemAvailable: 10553116 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452624 kB' 'Inactive: 1265592 kB' 'Active(anon): 
131780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132764 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71604 kB' 'KernelStack: 6352 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.141 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.141 11:27:40 
[repetitive xtrace trimmed: the same per-field walk skips Active through HugePages_Total, since none of them matches HugePages_Surp]
00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9173040 kB' 'MemAvailable: 10553116 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452680 kB' 'Inactive: 1265592 kB' 'Active(anon): 131836 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132756 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71596 kB' 'KernelStack: 6288 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.143 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.143 11:27:40 
[repetitive xtrace trimmed: the per-field walk skips Active(anon) through ShmemPmdMapped, since none of them matches HugePages_Rsvd]
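Every get_meminfo call in this stretch (AnonHugePages, HugePages_Surp, and the HugePages_Rsvd lookup that completes just below) runs the same field-matching walk. A minimal sketch of the helper as it can be reconstructed from the xtrace above - the real implementation lives in SPDK's test/setup/common.sh and may differ in detail, and the trailing fallback return is an assumption:

shopt -s extglob                             # needed for the +([0-9]) pattern below
get_meminfo() {                              # sketch only, pieced together from common.sh@16-33 above
    local get=$1 node=$2                     # e.g. get=HugePages_Rsvd; node is empty for system-wide stats
    local var val mem_f mem
    mem_f=/proc/meminfo
    # with a node argument the per-node file is probed instead (the check at common.sh@23 above)
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # strip the "Node N " prefix of per-node meminfo lines
    printf '%s\n' "${mem[@]}"                # the long quoted snapshots above are this dump
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue     # the trimmed runs above are these continue hits
        echo "$val"                          # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
        return 0
    done
    return 1                                 # assumed fallback if the field is missing
}

Against the snapshots above, get_meminfo HugePages_Total would print 512 and get_meminfo AnonHugePages prints 0, which is what the hugepages.sh bookkeeping below consumes.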
00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:37.145 nr_hugepages=512 00:03:37.145 resv_hugepages=0 
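With anon=0, surp=0 and resv=0 collected and nr_hugepages=512 just echoed, the arithmetic checks that follow at hugepages.sh@107 and @109 only confirm that the preallocated pool adds up. Roughly, as a sketch of the bookkeeping rather than the script's exact code:

nr_hugepages=512   # HugePages_Total from the snapshots above
surp=0             # HugePages_Surp
resv=0             # HugePages_Rsvd
anon=0             # AnonHugePages (transparent huge pages, tracked separately)
(( 512 == nr_hugepages + surp + resv ))   # mirrors hugepages.sh@107: requested pool fully accounted for
(( 512 == nr_hugepages ))                 # mirrors hugepages.sh@109: no surplus or reserved pages in play

Both tests pass here, so the script goes on to query HugePages_Total again (hugepages.sh@110), which is the walk still in progress at the end of this excerpt.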
00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.145 surplus_hugepages=0 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.145 anon_hugepages=0 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.145 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9173292 kB' 'MemAvailable: 10553368 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1265592 kB' 'Active(anon): 131812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122948 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132756 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71596 kB' 'KernelStack: 6272 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.146 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 
11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.147 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 9173876 kB' 'MemUsed: 3068088 kB' 'SwapCached: 0 kB' 'Active: 452736 kB' 'Inactive: 1265592 kB' 'Active(anon): 131892 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1596900 kB' 'Mapped: 48560 kB' 'AnonPages: 123284 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61160 kB' 'Slab: 132756 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.148 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:37.149 node0=512 expecting 512 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:37.149 00:03:37.149 real 0m0.514s 00:03:37.149 user 0m0.266s 00:03:37.149 sys 0m0.277s 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.149 11:27:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:37.149 ************************************ 00:03:37.149 END TEST custom_alloc 
00:03:37.149 ************************************ 00:03:37.408 11:27:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:37.408 11:27:40 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:37.408 11:27:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.408 11:27:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.408 11:27:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.408 ************************************ 00:03:37.408 START TEST no_shrink_alloc 00:03:37.408 ************************************ 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.408 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:37.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.673 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.673 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:37.673 
11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8121840 kB' 'MemAvailable: 9501916 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452704 kB' 'Inactive: 1265592 kB' 'Active(anon): 131860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123216 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132744 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71584 kB' 'KernelStack: 6320 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
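The hugepage count being verified here comes from the get_test_nr_hugepages 2097152 0 call earlier in this trace: the requested size is split across the single user-specified node (node 0), and the resulting nr_hugepages=1024 is consistent with dividing the size by the hugepage size reported in meminfo, taking the size argument as kB (an inference from the numbers, not something the log states). A worked example using the values the trace itself reports (variable names other than nr_hugepages are illustrative):

size_kb=2097152          # "local size=2097152" in the trace
hugepagesize_kb=2048     # "Hugepagesize: 2048 kB" in the meminfo dump
nr_hugepages=$((size_kb / hugepagesize_kb))
echo "$nr_hugepages"                              # 1024, matching "HugePages_Total: 1024"
echo "$((nr_hugepages * hugepagesize_kb)) kB"     # 2097152 kB, matching "Hugetlb: 2097152 kB"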
00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 
11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:37.673 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 
11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8121840 kB' 'MemAvailable: 9501916 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452940 kB' 'Inactive: 1265592 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123236 kB' 'Mapped: 48564 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132744 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71584 kB' 'KernelStack: 6320 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.674 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.675 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8122256 kB' 'MemAvailable: 9502332 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1265592 kB' 'Active(anon): 131812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122976 kB' 'Mapped: 48564 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132744 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71584 kB' 'KernelStack: 6320 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.676 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.677 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:37.678 nr_hugepages=1024 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.678 resv_hugepages=0 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.678 surplus_hugepages=0 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.678 anon_hugepages=0 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
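The trace above is the get_meminfo helper from setup/common.sh scanning /proc/meminfo one key at a time: each "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pair is one loop iteration over a non-matching key, and the "echo 0" / "return 0" at the end is the hit on the requested key (HugePages_Rsvd here, HugePages_Surp and AnonHugePages in the passes before it). A minimal sketch of that parsing loop, reconstructed from the xtrace output rather than from the script itself, so details may differ from the real setup/common.sh:

```bash
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop, reconstructed from the xtrace above.
# Assumption: this only approximates the real setup/common.sh helper.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f mem
	mem_f=/proc/meminfo
	# Use the per-node meminfo when a NUMA node was requested and sysfs exposes it.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	# Per-node lines carry a "Node N " prefix; strip it so keys match /proc/meminfo.
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # one pair of trace lines per non-matching key
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# As in the log: get_meminfo HugePages_Rsvd  ->  0
get_meminfo HugePages_Rsvd
```

Each non-matching key produces the four trace lines seen above (the [[ ]] test, continue, IFS=': ', and read), which is why this scan dominates the output here.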
00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8122256 kB' 'MemAvailable: 9502332 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1265592 kB' 'Active(anon): 131812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122976 kB' 'Mapped: 48564 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132732 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71572 kB' 'KernelStack: 6320 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 354596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
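The meminfo snapshot printed above is what the final get_meminfo HugePages_Total pass walks through, and together with the earlier anon=0, surp=0 and resv=0 results it feeds the accounting check in setup/hugepages.sh, visible in the trace as (( 1024 == nr_hugepages + surp + resv )). A small illustrative check restating the logged numbers (values are copied from the snapshot, not read live):

```bash
#!/usr/bin/env bash
# Illustrative consistency check over the values visible in the snapshot above.
# Assumption: the assignments restate logged numbers; nothing is read live here.
requested=1024      # pages the test asked for
nr_hugepages=1024   # HugePages_Total
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
anon=0              # AnonHugePages (kB)
hugepagesize_kb=2048
hugetlb_kb=2097152  # Hugetlb

# The check from the trace: (( 1024 == nr_hugepages + surp + resv ))
(( requested == nr_hugepages + surp + resv )) && echo "pool fully accounted for"
(( anon == 0 && surp == 0 && resv == 0 )) && echo "no anon/surplus/reserved hugepages"

# Cross-check against the snapshot: total hugetlb memory = page count * page size.
(( hugetlb_kb == nr_hugepages * hugepagesize_kb )) &&
	echo "Hugetlb matches: $((nr_hugepages * hugepagesize_kb)) kB"
```

With 1024 free pages of 2048 kB each, the 2097152 kB Hugetlb figure in the snapshot is exactly the whole pool, i.e. nothing has been shrunk, reserved, or left surplus, which is what the no_shrink_alloc test expects before it proceeds.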
00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.678 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.679 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8122256 kB' 'MemUsed: 4119708 kB' 'SwapCached: 0 kB' 'Active: 452608 kB' 'Inactive: 1265592 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1596900 kB' 'Mapped: 48564 kB' 'AnonPages: 122932 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61160 kB' 'Slab: 132728 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.680 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.967 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 
11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.968 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.969 node0=1024 expecting 1024 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.969 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:38.229 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:38.229 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:38.229 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:38.229 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:38.229 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:38.229 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:38.229 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.229 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.229 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8120996 kB' 'MemAvailable: 9501068 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 448236 kB' 'Inactive: 1265592 kB' 'Active(anon): 127392 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118508 kB' 'Mapped: 47880 kB' 'Shmem: 10464 kB' 'KReclaimable: 61152 kB' 'Slab: 132540 kB' 'SReclaimable: 61152 kB' 'SUnreclaim: 71388 kB' 'KernelStack: 6164 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
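Between the two verification passes the test re-runs the setup script with CLEAR_HUGE=no and NRHUGE=512, and scripts/setup.sh keeps the existing 1024-page pool instead of shrinking it, hence the "INFO: Requested 512 hugepages but 1024 already allocated on node0" line; verify_nr_hugepages then only counts AnonHugePages because transparent hugepages are not set to [never]. A hedged reconstruction of that phase (the env names and script path are taken from the trace; the THP file is assumed to be the standard sysfs node, and get_meminfo_sketch is the illustrative helper sketched earlier):

    # second phase, reconstructed from the trace rather than copied from hugepages.sh
    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # -> INFO: Requested 512 hugepages but 1024 already allocated on node0

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)           # 0 kB on this runner
    else
        anon=0
    fi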
00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.230 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.231 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8120996 kB' 'MemAvailable: 9501068 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 448028 kB' 'Inactive: 1265592 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118344 kB' 'Mapped: 47764 kB' 'Shmem: 10464 kB' 'KReclaimable: 61152 kB' 'Slab: 132532 kB' 'SReclaimable: 61152 kB' 'SUnreclaim: 71380 kB' 'KernelStack: 6208 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... per-key xtrace entries comparing every /proc/meminfo field against HugePages_Surp elided; the loop continues until the key below matches ...]
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8120996 kB' 'MemAvailable: 9501068 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 447736 kB' 'Inactive: 1265592 kB' 'Active(anon): 126892 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118332 kB' 'Mapped: 47824 kB' 'Shmem: 10464 kB' 'KReclaimable: 61152 kB' 'Slab: 132528 kB' 'SReclaimable: 61152 kB' 'SUnreclaim: 71376 kB' 'KernelStack: 6224 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.233 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.233 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[... per-key xtrace entries comparing the remaining /proc/meminfo fields against HugePages_Rsvd elided; the loop continues until the key below matches ...]
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:38.235 nr_hugepages=1024 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:38.235 resv_hugepages=0 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:38.235 surplus_hugepages=0 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:38.235 anon_hugepages=0 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
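The setup/hugepages.sh@102-@109 entries just above are the heart of the no_shrink_alloc check. Roughly, and as an illustrative reconstruction using the values from this run rather than the verbatim test source, they amount to:

  # Pool accounting for the no_shrink_alloc test (values taken from the trace above).
  nr_hugepages=1024          # target pool size for this run (echoed as nr_hugepages=1024)
  surp=0                     # HugePages_Surp from get_meminfo
  resv=0                     # HugePages_Rsvd from get_meminfo
  anon=0                     # AnonHugePages from get_meminfo

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # Every page of the requested 1024-page pool must still be accounted for,
  # and the pool itself must not have shrunk while the allocations ran.
  (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0
  (( 1024 == nr_hugepages ))                 # pool size unchanged -> no shrink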
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.235 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8120996 kB' 'MemAvailable: 9501068 kB' 'Buffers: 2436 kB' 'Cached: 1594464 kB' 'SwapCached: 0 kB' 'Active: 447652 kB' 'Inactive: 1265592 kB' 'Active(anon): 126808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118216 kB' 'Mapped: 47824 kB' 'Shmem: 10464 kB' 'KReclaimable: 61152 kB' 'Slab: 132528 kB' 'SReclaimable: 61152 kB' 'SUnreclaim: 71376 kB' 'KernelStack: 6192 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
[... per-key xtrace entries comparing each /proc/meminfo field against HugePages_Total elided; this excerpt is truncated partway through that loop ...]
var val _ 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.236 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
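Editor's note: the long run of "[[ <field> == HugePages_Total ]]" / "continue" records through this stretch of the trace is produced by a lookup helper that walks a meminfo-style file field by field until it reaches the requested key. The sketch below shows that pattern in isolation; the helper name and argument handling are illustrative, not the literal setup/common.sh code (the trace's own helper is get_meminfo, which takes a field name and an optional NUMA node).

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup pattern visible in the trace.
# Takes a field name and a meminfo-style file path; prints the value.
meminfo_value() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it so
    # /proc/meminfo and /sys/devices/system/node/node*/meminfo parse alike.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue   # skip every field until the key matches
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$file")
    return 1   # key not present in the file
}

meminfo_value HugePages_Total
meminfo_value HugePages_Surp /sys/devices/system/node/node0/meminfo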
00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8120996 kB' 'MemUsed: 4120968 kB' 'SwapCached: 0 kB' 'Active: 
447916 kB' 'Inactive: 1265592 kB' 'Active(anon): 127072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1265592 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1596900 kB' 'Mapped: 47824 kB' 'AnonPages: 118224 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61152 kB' 'Slab: 132528 kB' 'SReclaimable: 61152 kB' 'SUnreclaim: 71376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 
11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.237 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.238 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.496 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.496 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.496 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.496 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.496 node0=1024 expecting 1024 00:03:38.496 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:38.496 11:27:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:38.496 00:03:38.496 real 0m1.071s 00:03:38.496 user 0m0.509s 00:03:38.496 sys 0m0.570s 00:03:38.496 11:27:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.496 11:27:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:38.496 ************************************ 00:03:38.496 END TEST no_shrink_alloc 00:03:38.496 ************************************ 00:03:38.496 11:27:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:38.496 11:27:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:38.496 11:27:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:38.497 11:27:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:38.497 
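Editor's note: the "node0=1024 expecting 1024" line and the "[[ 1024 == \1\0\2\4 ]]" comparison above are the per-node cross-check: the hugepage count the kernel reports for each NUMA node has to match what the test allocated. A rough sketch of that kind of check follows; it assumes a single 2048 kB hugepage size and ignores the surplus/reserved adjustments that hugepages.sh folds into its expected value.

#!/usr/bin/env bash
expected=1024   # hugepages the test asked for, per the log above

for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -e $node_dir ]] || continue
    node=${node_dir##*/node}
    nr_file=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
    [[ -e $nr_file ]] || continue
    actual=$(<"$nr_file")
    echo "node$node=$actual expecting $expected"
    [[ $actual -eq $expected ]] || exit 1
done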
11:27:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.497 11:27:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.497 11:27:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:38.497 11:27:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:38.497 11:27:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:38.497 11:27:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:38.497 00:03:38.497 real 0m4.499s 00:03:38.497 user 0m2.109s 00:03:38.497 sys 0m2.376s 00:03:38.497 11:27:41 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:38.497 ************************************ 00:03:38.497 END TEST hugepages 00:03:38.497 ************************************ 00:03:38.497 11:27:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.497 11:27:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:38.497 11:27:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:38.497 11:27:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.497 11:27:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.497 11:27:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.497 ************************************ 00:03:38.497 START TEST driver 00:03:38.497 ************************************ 00:03:38.497 11:27:41 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:38.497 * Looking for test storage... 00:03:38.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:38.497 11:27:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:38.497 11:27:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.497 11:27:41 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.073 11:27:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:39.073 11:27:42 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.073 11:27:42 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.073 11:27:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:39.073 ************************************ 00:03:39.073 START TEST guess_driver 00:03:39.073 ************************************ 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
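Editor's note: the guess_driver trace that follows is the driver autodetection. vfio is only picked when the host actually exposes IOMMU groups, or when the vfio module allows unsafe no-IOMMU mode; otherwise the test falls back to uio_pci_generic, accepting it if modprobe can resolve the module chain. (Incidentally, driver.sh@21 in the trace declares "local iommu_grups", note the spelling, while the array actually populated at @27 is iommu_groups.) Below is a condensed sketch of that decision, not the literal driver.sh code.

#!/usr/bin/env bash
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)

    # vfio is usable when IOMMU groups exist on the host...
    if (( ${#groups[@]} > 0 )) && [[ -e ${groups[0]} ]]; then
        echo vfio-pci
        return 0
    fi
    # ...or when the vfio module explicitly allows unsafe no-IOMMU mode.
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
       [[ $(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode) == Y ]]; then
        echo vfio-pci
        return 0
    fi

    # Otherwise fall back to uio_pci_generic, provided modprobe can resolve
    # the module (the trace checks the --show-depends output for ".ko").
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi

    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver) || exit 1
echo "Looking for driver=$driver"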
00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:39.073 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:39.073 Looking for driver=uio_pci_generic 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.073 11:27:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:39.642 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:39.642 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:39.642 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.899 11:27:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.465 00:03:40.465 real 0m1.330s 00:03:40.465 user 0m0.508s 00:03:40.465 sys 0m0.860s 00:03:40.465 11:27:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:40.465 11:27:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:40.465 ************************************ 00:03:40.465 END TEST guess_driver 00:03:40.465 ************************************ 00:03:40.465 11:27:43 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:40.465 ************************************ 00:03:40.465 END TEST driver 00:03:40.465 ************************************ 00:03:40.465 00:03:40.465 real 0m1.995s 00:03:40.465 user 0m0.754s 00:03:40.465 sys 0m1.341s 00:03:40.465 11:27:43 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.465 11:27:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:40.465 11:27:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:40.465 11:27:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:40.465 11:27:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.465 11:27:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.465 11:27:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.465 ************************************ 00:03:40.465 START TEST devices 00:03:40.465 ************************************ 00:03:40.465 11:27:43 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:40.465 * Looking for test storage... 00:03:40.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:40.465 11:27:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:40.465 11:27:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:40.465 11:27:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.465 11:27:43 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
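Editor's note: the devices suite starts by weeding out zoned block devices, since the rest of the test expects conventional namespaces. The is_block_zoned records above and below read queue/zoned for each /sys/block/nvme* entry and only flag devices whose zoned model is something other than "none". A minimal sketch of that filter (the array name is illustrative):

#!/usr/bin/env bash
declare -A zoned_devs=()

for dev in /sys/block/nvme*; do
    [[ -e $dev ]] || continue              # glob did not match: no NVMe devices
    zoned_file=$dev/queue/zoned
    [[ -e $zoned_file ]] || continue
    if [[ $(<"$zoned_file") != none ]]; then
        zoned_devs[${dev##*/}]=1           # host-aware / host-managed namespace
    fi
done

if (( ${#zoned_devs[@]} )); then
    echo "zoned devices: ${!zoned_devs[*]}"
else
    echo "no zoned devices found"
fi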
00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:41.401 11:27:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:41.401 No valid GPT data, bailing 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:41.401 11:27:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:41.401 11:27:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:41.401 11:27:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:41.401 
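Editor's note: "No valid GPT data, bailing" is the desired outcome here. A candidate disk is only claimed for the test when it carries no partition table (spdk-gpt.py bails and blkid reports no PTTYPE) and it is at least min_disk_size, 3221225472 bytes (3 GiB), which the 4294967296-byte namespaces satisfy. The sketch below shows a simplified version of that selection loop; it leaves out the spdk-gpt.py consultation and the PCI_ALLOWED filtering that the real block_in_use path performs.

#!/usr/bin/env bash
min_disk_size=3221225472   # 3 GiB, matching devices.sh@198 in the trace
declare -a blocks=()

for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue
    dev=${block##*/}

    # A disk that already carries a partition table is treated as in use.
    # (The real check also asks scripts/spdk-gpt.py whether the GPT belongs
    # to SPDK; that part is not reproduced here.)
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]]; then
        continue
    fi

    # /sys/block/<dev>/size counts 512-byte sectors.
    size=$(( $(<"$block/size") * 512 ))
    (( size >= min_disk_size )) || continue

    blocks+=("$dev")
done

echo "usable test disks: ${blocks[*]}"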
11:27:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:41.401 No valid GPT data, bailing 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:41.401 11:27:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:41.401 11:27:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:41.401 11:27:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:41.401 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:41.401 11:27:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:41.659 No valid GPT data, bailing 00:03:41.659 11:27:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:41.659 11:27:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:41.659 11:27:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:41.659 11:27:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:41.659 11:27:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:41.659 11:27:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:41.659 11:27:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:41.659 11:27:44 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:41.659 No valid GPT data, bailing 00:03:41.659 11:27:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:41.659 11:27:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:41.659 11:27:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:41.659 11:27:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:41.659 11:27:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:41.659 11:27:44 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:41.659 11:27:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:41.660 11:27:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:41.660 11:27:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:41.660 11:27:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.660 11:27:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.660 11:27:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:41.660 ************************************ 00:03:41.660 START TEST nvme_mount 00:03:41.660 ************************************ 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:41.660 11:27:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:42.594 Creating new GPT entries in memory. 00:03:42.594 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:42.594 other utilities. 00:03:42.594 11:27:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:42.594 11:27:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.594 11:27:46 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:42.594 11:27:46 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:42.594 11:27:46 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:43.972 Creating new GPT entries in memory. 00:03:43.972 The operation has completed successfully. 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56984 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.972 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.231 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.231 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.231 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.231 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.231 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.231 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:44.231 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:44.232 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.232 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.491 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:44.491 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:44.491 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:44.491 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 
-- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.491 11:27:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.750 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.750 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:44.750 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:44.750 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.750 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.750 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.008 11:27:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.008 11:27:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:45.267 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.267 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:45.267 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:45.267 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.268 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.268 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:45.526 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:45.526 00:03:45.526 real 0m3.958s 00:03:45.526 user 0m0.683s 00:03:45.526 sys 0m1.024s 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.526 11:27:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:45.526 ************************************ 00:03:45.526 END TEST nvme_mount 00:03:45.527 ************************************ 00:03:45.785 11:27:48 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:45.785 11:27:48 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:45.785 11:27:48 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.785 11:27:48 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.785 11:27:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:45.785 ************************************ 00:03:45.785 START TEST dm_mount 00:03:45.785 ************************************ 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:45.785 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:46.769 Creating new GPT entries in memory. 00:03:46.769 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:46.769 other utilities. 00:03:46.769 11:27:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:46.769 11:27:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.769 11:27:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:46.769 11:27:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.769 11:27:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:47.704 Creating new GPT entries in memory. 00:03:47.704 The operation has completed successfully. 00:03:47.704 11:27:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:47.704 11:27:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.704 11:27:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.704 11:27:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.704 11:27:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:48.637 The operation has completed successfully. 
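The two sgdisk calls above define the dm_mount layout: size=1073741824 bytes divided by 4096 gives 262144 sectors per partition, so partition 1 spans sectors 2048-264191 and partition 2 spans 264192-526335 (128 MiB each at 512-byte sectors). The same layout can be rebuilt by hand with the stock tools; this is a rough sketch only, run as root -- the flock/uevent plumbing of setup/common.sh is dropped, partprobe stands in for scripts/sync_dev_uevents.sh, and the device-mapper table is an assumption, since devices.sh does not print the table behind the dmsetup create seen further down:

    sgdisk /dev/nvme0n1 --zap-all
    sgdisk /dev/nvme0n1 --new=1:2048:264191     # 262144 sectors, 128 MiB
    sgdisk /dev/nvme0n1 --new=2:264192:526335   # 262144 sectors, 128 MiB
    partprobe /dev/nvme0n1                      # stand-in for sync_dev_uevents.sh
    # Concatenate both partitions into one linear dm target; this matches the
    # nvme0n1p1/holders/dm-0 and nvme0n1p2/holders/dm-0 links checked later in the log.
    printf '%s\n' \
        '0 262144 linear /dev/nvme0n1p1 0' \
        '262144 262144 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test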
00:03:48.637 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.637 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.637 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57411 00:03:48.637 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:48.637 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.637 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:48.637 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.896 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.155 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.155 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.155 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.155 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.413 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.414 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.672 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.672 11:27:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.672 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:49.672 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.672 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.672 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:49.672 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:49.672 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:49.672 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:49.929 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:49.929 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:49.929 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.929 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:49.929 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:49.929 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:49.930 11:27:53 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:49.930 00:03:49.930 real 0m4.190s 00:03:49.930 user 0m0.449s 00:03:49.930 sys 0m0.679s 00:03:49.930 11:27:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.930 ************************************ 00:03:49.930 END TEST dm_mount 00:03:49.930 ************************************ 00:03:49.930 11:27:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:49.930 11:27:53 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:49.930 11:27:53 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:49.930 11:27:53 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:49.930 11:27:53 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:49.930 11:27:53 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:49.930 11:27:53 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:49.930 11:27:53 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:49.930 11:27:53 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.187 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:50.187 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:50.187 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.187 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.187 11:27:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:50.187 11:27:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:50.187 11:27:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:50.187 11:27:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.187 11:27:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.187 11:27:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.187 11:27:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:50.187 00:03:50.187 real 0m9.706s 00:03:50.187 user 0m1.786s 00:03:50.187 sys 0m2.323s 00:03:50.187 11:27:53 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.187 ************************************ 00:03:50.187 END TEST devices 00:03:50.187 11:27:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.187 ************************************ 00:03:50.187 11:27:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:50.187 00:03:50.187 real 0m21.012s 00:03:50.187 user 0m6.749s 00:03:50.187 sys 0m8.684s 00:03:50.187 11:27:53 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.187 11:27:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.187 ************************************ 00:03:50.187 END TEST setup.sh 00:03:50.187 ************************************ 00:03:50.187 11:27:53 -- common/autotest_common.sh@1142 -- # return 0 00:03:50.187 11:27:53 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:51.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.123 Hugepages 00:03:51.123 node hugesize free / total 00:03:51.123 node0 1048576kB 0 / 0 00:03:51.123 node0 2048kB 2048 / 2048 00:03:51.123 00:03:51.123 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.123 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:51.123 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:51.123 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:51.123 11:27:54 -- spdk/autotest.sh@130 -- # uname -s 00:03:51.123 11:27:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:51.123 11:27:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:51.123 11:27:54 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:51.689 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.947 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:51.947 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:51.947 11:27:55 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:52.881 11:27:56 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:52.881 11:27:56 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:52.881 11:27:56 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:52.881 11:27:56 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:52.881 11:27:56 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:52.881 11:27:56 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:52.881 11:27:56 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.140 11:27:56 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:53.140 11:27:56 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:53.140 11:27:56 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:53.140 11:27:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:53.140 11:27:56 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.398 Waiting for block devices as requested 00:03:53.398 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:53.398 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:53.656 11:27:56 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:53.656 11:27:56 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:53.656 11:27:56 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:53.656 11:27:56 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:03:53.656 11:27:56 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:53.656 11:27:56 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:53.656 11:27:56 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:53.656 11:27:56 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:03:53.656 11:27:56 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:03:53.656 11:27:56 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:03:53.656 11:27:56 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:03:53.656 11:27:56 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:53.656 11:27:56 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:53.656 11:27:56 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:53.656 11:27:56 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:53.656 11:27:56 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:53.656 11:27:56 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:03:53.656 11:27:56 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:53.656 11:27:56 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:53.656 11:27:56 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:53.656 11:27:56 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:53.656 11:27:56 -- common/autotest_common.sh@1557 -- # continue 00:03:53.656 
11:27:56 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:53.656 11:27:56 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:53.656 11:27:56 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:53.656 11:27:56 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:03:53.656 11:27:56 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:53.656 11:27:56 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:53.656 11:27:56 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:53.656 11:27:56 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:53.656 11:27:56 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:53.656 11:27:56 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:53.656 11:27:56 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:53.656 11:27:56 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:53.656 11:27:56 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:53.656 11:27:56 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:53.656 11:27:56 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:53.656 11:27:56 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:53.656 11:27:56 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:53.656 11:27:56 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:53.656 11:27:56 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:53.657 11:27:56 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:53.657 11:27:56 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:53.657 11:27:56 -- common/autotest_common.sh@1557 -- # continue 00:03:53.657 11:27:56 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:53.657 11:27:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.657 11:27:56 -- common/autotest_common.sh@10 -- # set +x 00:03:53.657 11:27:56 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:53.657 11:27:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.657 11:27:56 -- common/autotest_common.sh@10 -- # set +x 00:03:53.657 11:27:56 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.223 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.482 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:54.482 11:27:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:54.482 11:27:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:54.482 11:27:57 -- common/autotest_common.sh@10 -- # set +x 00:03:54.482 11:27:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:54.482 11:27:57 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:54.482 11:27:57 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:54.482 11:27:57 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:54.482 11:27:57 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:54.482 11:27:57 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:54.482 11:27:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:54.482 11:27:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:54.482 11:27:57 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.482 11:27:57 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:54.482 11:27:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:54.482 11:27:57 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:54.482 11:27:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:54.482 11:27:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:54.482 11:27:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:54.482 11:27:57 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:54.482 11:27:57 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:54.482 11:27:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:54.482 11:27:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:54.482 11:27:57 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:54.482 11:27:57 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:54.482 11:27:57 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:54.482 11:27:57 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:54.482 11:27:57 -- common/autotest_common.sh@1593 -- # return 0 00:03:54.482 11:27:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:54.482 11:27:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:54.482 11:27:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:54.482 11:27:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:54.482 11:27:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:54.482 11:27:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:54.482 11:27:57 -- common/autotest_common.sh@10 -- # set +x 00:03:54.482 11:27:57 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:03:54.482 11:27:57 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:03:54.482 11:27:57 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:03:54.482 11:27:57 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:54.482 11:27:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.482 11:27:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.482 11:27:57 -- common/autotest_common.sh@10 -- # set +x 00:03:54.482 ************************************ 00:03:54.482 START TEST env 00:03:54.482 ************************************ 00:03:54.482 11:27:57 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:54.740 * Looking for test storage... 
00:03:54.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:54.740 11:27:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:54.740 11:27:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.740 11:27:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.740 11:27:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.740 ************************************ 00:03:54.740 START TEST env_memory 00:03:54.740 ************************************ 00:03:54.740 11:27:57 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:54.740 00:03:54.740 00:03:54.740 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.740 http://cunit.sourceforge.net/ 00:03:54.740 00:03:54.740 00:03:54.740 Suite: memory 00:03:54.740 Test: alloc and free memory map ...[2024-07-12 11:27:57.996703] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:54.740 passed 00:03:54.740 Test: mem map translation ...[2024-07-12 11:27:58.021766] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:54.740 [2024-07-12 11:27:58.021834] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:54.740 [2024-07-12 11:27:58.021883] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:54.740 [2024-07-12 11:27:58.021893] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:54.740 passed 00:03:54.740 Test: mem map registration ...[2024-07-12 11:27:58.072175] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:54.740 [2024-07-12 11:27:58.072218] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:54.740 passed 00:03:54.740 Test: mem map adjacent registrations ...passed 00:03:54.740 00:03:54.740 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.740 suites 1 1 n/a 0 0 00:03:54.740 tests 4 4 4 0 0 00:03:54.740 asserts 152 152 152 0 n/a 00:03:54.740 00:03:54.740 Elapsed time = 0.174 seconds 00:03:54.740 00:03:54.740 real 0m0.191s 00:03:54.740 user 0m0.176s 00:03:54.740 sys 0m0.011s 00:03:54.740 11:27:58 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.740 11:27:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:54.740 ************************************ 00:03:54.740 END TEST env_memory 00:03:54.740 ************************************ 00:03:54.740 11:27:58 env -- common/autotest_common.sh@1142 -- # return 0 00:03:54.740 11:27:58 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:54.740 11:27:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.740 11:27:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.740 11:27:58 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.999 ************************************ 00:03:54.999 START TEST env_vtophys 
00:03:54.999 ************************************ 00:03:54.999 11:27:58 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:54.999 EAL: lib.eal log level changed from notice to debug 00:03:54.999 EAL: Detected lcore 0 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 1 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 2 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 3 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 4 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 5 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 6 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 7 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 8 as core 0 on socket 0 00:03:54.999 EAL: Detected lcore 9 as core 0 on socket 0 00:03:54.999 EAL: Maximum logical cores by configuration: 128 00:03:54.999 EAL: Detected CPU lcores: 10 00:03:54.999 EAL: Detected NUMA nodes: 1 00:03:54.999 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:54.999 EAL: Detected shared linkage of DPDK 00:03:54.999 EAL: No shared files mode enabled, IPC will be disabled 00:03:54.999 EAL: Selected IOVA mode 'PA' 00:03:54.999 EAL: Probing VFIO support... 00:03:54.999 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:54.999 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:54.999 EAL: Ask a virtual area of 0x2e000 bytes 00:03:54.999 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:54.999 EAL: Setting up physically contiguous memory... 00:03:54.999 EAL: Setting maximum number of open files to 524288 00:03:54.999 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:54.999 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:54.999 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.999 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:54.999 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.999 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.999 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:54.999 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:54.999 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.999 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:54.999 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.999 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.999 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:54.999 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:54.999 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.999 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:54.999 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.999 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.999 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:54.999 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:54.999 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.999 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:54.999 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.999 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.999 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:54.999 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:54.999 EAL: Hugepages will be freed exactly as allocated. 
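The EAL bring-up above reserves virtual address space for four memseg lists (n_segs 8192 x 2 MiB pages = 16 GiB of VA each, backed by hugepages only on demand) and relies on the 2 MiB pool provisioned earlier by setup.sh (node0 2048kB 2048 / 2048 in the Hugepages table). If a local rerun of this suite fails during EAL init, the pool is the first thing to check; the sysfs paths below are standard kernel interfaces, and HUGEMEM (in MB) is the knob scripts/setup.sh reads -- shown as the likely equivalent of what the harness did, not a verbatim replay:

    grep -i huge /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    # Re-provision 2048 x 2 MiB pages (4096 MB) the way the harness does:
    sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh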
00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: TSC frequency is ~2200000 KHz 00:03:54.999 EAL: Main lcore 0 is ready (tid=7fbe01c48a00;cpuset=[0]) 00:03:54.999 EAL: Trying to obtain current memory policy. 00:03:54.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.999 EAL: Restoring previous memory policy: 0 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was expanded by 2MB 00:03:54.999 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:54.999 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:54.999 EAL: Mem event callback 'spdk:(nil)' registered 00:03:54.999 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:54.999 00:03:54.999 00:03:54.999 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.999 http://cunit.sourceforge.net/ 00:03:54.999 00:03:54.999 00:03:54.999 Suite: components_suite 00:03:54.999 Test: vtophys_malloc_test ...passed 00:03:54.999 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:54.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.999 EAL: Restoring previous memory policy: 4 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was expanded by 4MB 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was shrunk by 4MB 00:03:54.999 EAL: Trying to obtain current memory policy. 00:03:54.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.999 EAL: Restoring previous memory policy: 4 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was expanded by 6MB 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was shrunk by 6MB 00:03:54.999 EAL: Trying to obtain current memory policy. 00:03:54.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.999 EAL: Restoring previous memory policy: 4 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was expanded by 10MB 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was shrunk by 10MB 00:03:54.999 EAL: Trying to obtain current memory policy. 
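The expand/shrink pairs that follow come from vtophys_spdk_malloc_test exercising progressively larger allocations and freeing them again, with DPDK growing and shrinking its heap from the hugepage pool each round. The heap grow sizes reported below follow a 2^k + 2 MB progression (4, 6, 10, 18, ... 1026 MB); this is an observation about the log output, not a statement about the test's internals, and it can be checked with a one-liner:

    for k in $(seq 1 10); do echo "$(( (1 << k) + 2 )) MB"; done   # 4 6 10 18 34 66 130 258 514 1026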
00:03:54.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.999 EAL: Restoring previous memory policy: 4 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was expanded by 18MB 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was shrunk by 18MB 00:03:54.999 EAL: Trying to obtain current memory policy. 00:03:54.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.999 EAL: Restoring previous memory policy: 4 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was expanded by 34MB 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was shrunk by 34MB 00:03:54.999 EAL: Trying to obtain current memory policy. 00:03:54.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.999 EAL: Restoring previous memory policy: 4 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was expanded by 66MB 00:03:54.999 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.999 EAL: request: mp_malloc_sync 00:03:54.999 EAL: No shared files mode enabled, IPC is disabled 00:03:54.999 EAL: Heap on socket 0 was shrunk by 66MB 00:03:54.999 EAL: Trying to obtain current memory policy. 00:03:54.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.258 EAL: Restoring previous memory policy: 4 00:03:55.258 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.258 EAL: request: mp_malloc_sync 00:03:55.258 EAL: No shared files mode enabled, IPC is disabled 00:03:55.258 EAL: Heap on socket 0 was expanded by 130MB 00:03:55.258 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.258 EAL: request: mp_malloc_sync 00:03:55.258 EAL: No shared files mode enabled, IPC is disabled 00:03:55.258 EAL: Heap on socket 0 was shrunk by 130MB 00:03:55.258 EAL: Trying to obtain current memory policy. 00:03:55.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.258 EAL: Restoring previous memory policy: 4 00:03:55.258 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.258 EAL: request: mp_malloc_sync 00:03:55.258 EAL: No shared files mode enabled, IPC is disabled 00:03:55.258 EAL: Heap on socket 0 was expanded by 258MB 00:03:55.258 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.258 EAL: request: mp_malloc_sync 00:03:55.258 EAL: No shared files mode enabled, IPC is disabled 00:03:55.258 EAL: Heap on socket 0 was shrunk by 258MB 00:03:55.258 EAL: Trying to obtain current memory policy. 
00:03:55.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.516 EAL: Restoring previous memory policy: 4 00:03:55.516 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.516 EAL: request: mp_malloc_sync 00:03:55.516 EAL: No shared files mode enabled, IPC is disabled 00:03:55.516 EAL: Heap on socket 0 was expanded by 514MB 00:03:55.516 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.774 EAL: request: mp_malloc_sync 00:03:55.774 EAL: No shared files mode enabled, IPC is disabled 00:03:55.774 EAL: Heap on socket 0 was shrunk by 514MB 00:03:55.774 EAL: Trying to obtain current memory policy. 00:03:55.774 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.034 EAL: Restoring previous memory policy: 4 00:03:56.034 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.034 EAL: request: mp_malloc_sync 00:03:56.034 EAL: No shared files mode enabled, IPC is disabled 00:03:56.034 EAL: Heap on socket 0 was expanded by 1026MB 00:03:56.034 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.292 EAL: request: mp_malloc_sync 00:03:56.292 EAL: No shared files mode enabled, IPC is disabled 00:03:56.292 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:56.292 passed 00:03:56.292 00:03:56.292 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.292 suites 1 1 n/a 0 0 00:03:56.292 tests 2 2 2 0 0 00:03:56.292 asserts 5260 5260 5260 0 n/a 00:03:56.292 00:03:56.292 Elapsed time = 1.288 seconds 00:03:56.292 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.292 EAL: request: mp_malloc_sync 00:03:56.292 EAL: No shared files mode enabled, IPC is disabled 00:03:56.292 EAL: Heap on socket 0 was shrunk by 2MB 00:03:56.292 EAL: No shared files mode enabled, IPC is disabled 00:03:56.292 EAL: No shared files mode enabled, IPC is disabled 00:03:56.292 EAL: No shared files mode enabled, IPC is disabled 00:03:56.292 00:03:56.292 real 0m1.485s 00:03:56.292 user 0m0.811s 00:03:56.292 sys 0m0.540s 00:03:56.292 11:27:59 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.292 11:27:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:56.292 ************************************ 00:03:56.292 END TEST env_vtophys 00:03:56.292 ************************************ 00:03:56.292 11:27:59 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.292 11:27:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:56.292 11:27:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.292 11:27:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.292 11:27:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.292 ************************************ 00:03:56.292 START TEST env_pci 00:03:56.292 ************************************ 00:03:56.292 11:27:59 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:56.292 00:03:56.292 00:03:56.292 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.292 http://cunit.sourceforge.net/ 00:03:56.292 00:03:56.292 00:03:56.292 Suite: pci 00:03:56.292 Test: pci_hook ...[2024-07-12 11:27:59.734350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58599 has claimed it 00:03:56.292 passed 00:03:56.292 00:03:56.292 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.292 suites 1 1 n/a 0 0 00:03:56.292 tests 1 1 1 0 0 00:03:56.292 asserts 25 25 25 0 n/a 00:03:56.292 
00:03:56.292 Elapsed time = 0.003 seconds 00:03:56.292 EAL: Cannot find device (10000:00:01.0) 00:03:56.292 EAL: Failed to attach device on primary process 00:03:56.550 00:03:56.550 real 0m0.021s 00:03:56.550 user 0m0.006s 00:03:56.550 sys 0m0.014s 00:03:56.550 11:27:59 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.550 11:27:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:56.550 ************************************ 00:03:56.550 END TEST env_pci 00:03:56.550 ************************************ 00:03:56.550 11:27:59 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.550 11:27:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:56.550 11:27:59 env -- env/env.sh@15 -- # uname 00:03:56.550 11:27:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:56.550 11:27:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:56.551 11:27:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:56.551 11:27:59 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:56.551 11:27:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.551 11:27:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.551 ************************************ 00:03:56.551 START TEST env_dpdk_post_init 00:03:56.551 ************************************ 00:03:56.551 11:27:59 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:56.551 EAL: Detected CPU lcores: 10 00:03:56.551 EAL: Detected NUMA nodes: 1 00:03:56.551 EAL: Detected shared linkage of DPDK 00:03:56.551 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.551 EAL: Selected IOVA mode 'PA' 00:03:56.551 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.551 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:56.551 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:56.551 Starting DPDK initialization... 00:03:56.551 Starting SPDK post initialization... 00:03:56.551 SPDK NVMe probe 00:03:56.551 Attaching to 0000:00:10.0 00:03:56.551 Attaching to 0000:00:11.0 00:03:56.551 Attached to 0000:00:10.0 00:03:56.551 Attached to 0000:00:11.0 00:03:56.551 Cleaning up... 
00:03:56.551 00:03:56.551 real 0m0.185s 00:03:56.551 user 0m0.049s 00:03:56.551 sys 0m0.034s 00:03:56.551 11:27:59 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.551 11:27:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:56.551 ************************************ 00:03:56.551 END TEST env_dpdk_post_init 00:03:56.551 ************************************ 00:03:56.809 11:28:00 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.809 11:28:00 env -- env/env.sh@26 -- # uname 00:03:56.809 11:28:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:56.809 11:28:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.809 11:28:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.809 11:28:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.809 11:28:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.809 ************************************ 00:03:56.809 START TEST env_mem_callbacks 00:03:56.809 ************************************ 00:03:56.809 11:28:00 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.809 EAL: Detected CPU lcores: 10 00:03:56.809 EAL: Detected NUMA nodes: 1 00:03:56.809 EAL: Detected shared linkage of DPDK 00:03:56.809 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.809 EAL: Selected IOVA mode 'PA' 00:03:56.809 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.809 00:03:56.809 00:03:56.809 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.809 http://cunit.sourceforge.net/ 00:03:56.809 00:03:56.809 00:03:56.809 Suite: memory 00:03:56.809 Test: test ... 
00:03:56.809 register 0x200000200000 2097152 00:03:56.809 malloc 3145728 00:03:56.809 register 0x200000400000 4194304 00:03:56.809 buf 0x200000500000 len 3145728 PASSED 00:03:56.809 malloc 64 00:03:56.809 buf 0x2000004fff40 len 64 PASSED 00:03:56.809 malloc 4194304 00:03:56.809 register 0x200000800000 6291456 00:03:56.809 buf 0x200000a00000 len 4194304 PASSED 00:03:56.809 free 0x200000500000 3145728 00:03:56.809 free 0x2000004fff40 64 00:03:56.809 unregister 0x200000400000 4194304 PASSED 00:03:56.809 free 0x200000a00000 4194304 00:03:56.809 unregister 0x200000800000 6291456 PASSED 00:03:56.809 malloc 8388608 00:03:56.809 register 0x200000400000 10485760 00:03:56.809 buf 0x200000600000 len 8388608 PASSED 00:03:56.809 free 0x200000600000 8388608 00:03:56.809 unregister 0x200000400000 10485760 PASSED 00:03:56.809 passed 00:03:56.809 00:03:56.809 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.809 suites 1 1 n/a 0 0 00:03:56.809 tests 1 1 1 0 0 00:03:56.809 asserts 15 15 15 0 n/a 00:03:56.809 00:03:56.809 Elapsed time = 0.007 seconds 00:03:56.809 00:03:56.809 real 0m0.145s 00:03:56.809 user 0m0.014s 00:03:56.809 sys 0m0.030s 00:03:56.809 11:28:00 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.809 11:28:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:56.809 ************************************ 00:03:56.809 END TEST env_mem_callbacks 00:03:56.809 ************************************ 00:03:56.809 11:28:00 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.809 00:03:56.809 real 0m2.315s 00:03:56.809 user 0m1.158s 00:03:56.809 sys 0m0.794s 00:03:56.809 11:28:00 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.809 11:28:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.809 ************************************ 00:03:56.809 END TEST env 00:03:56.809 ************************************ 00:03:56.809 11:28:00 -- common/autotest_common.sh@1142 -- # return 0 00:03:56.809 11:28:00 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:56.809 11:28:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.809 11:28:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.809 11:28:00 -- common/autotest_common.sh@10 -- # set +x 00:03:56.809 ************************************ 00:03:56.809 START TEST rpc 00:03:56.809 ************************************ 00:03:56.809 11:28:00 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:57.067 * Looking for test storage... 00:03:57.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:57.067 11:28:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58714 00:03:57.067 11:28:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:57.067 11:28:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.067 11:28:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58714 00:03:57.067 11:28:00 rpc -- common/autotest_common.sh@829 -- # '[' -z 58714 ']' 00:03:57.067 11:28:00 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.067 11:28:00 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:57.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.067 11:28:00 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
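Note: the rpc suite launches spdk_tgt above with "-e bdev", which enables the bdev tracepoint group before any RPCs are issued; that group mask (0x8) is what rpc_trace_cmd_test checks later. A sketch of capturing the trace while the target runs, using the command the application itself advertises below; the pid (58714 here) changes on every run, and the spdk_trace path assumes a default in-tree build:

  $ sudo ./build/bin/spdk_tgt -e bdev &
  $ sudo ./build/bin/spdk_trace -s spdk_tgt -p <pid>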
00:03:57.067 11:28:00 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:57.067 11:28:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.067 [2024-07-12 11:28:00.393510] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:03:57.067 [2024-07-12 11:28:00.393651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58714 ] 00:03:57.325 [2024-07-12 11:28:00.530039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.325 [2024-07-12 11:28:00.653613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:57.325 [2024-07-12 11:28:00.653675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58714' to capture a snapshot of events at runtime. 00:03:57.325 [2024-07-12 11:28:00.653687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:57.325 [2024-07-12 11:28:00.653696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:57.325 [2024-07-12 11:28:00.653703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58714 for offline analysis/debug. 00:03:57.325 [2024-07-12 11:28:00.653734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.325 [2024-07-12 11:28:00.708767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:03:58.260 11:28:01 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:58.260 11:28:01 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:58.260 11:28:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.260 11:28:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.260 11:28:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:58.260 11:28:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:58.260 11:28:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.260 11:28:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.260 11:28:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 ************************************ 00:03:58.260 START TEST rpc_integrity 00:03:58.260 ************************************ 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.260 { 00:03:58.260 "name": "Malloc0", 00:03:58.260 "aliases": [ 00:03:58.260 "9714e425-ee75-490b-820e-fc497cc7d79b" 00:03:58.260 ], 00:03:58.260 "product_name": "Malloc disk", 00:03:58.260 "block_size": 512, 00:03:58.260 "num_blocks": 16384, 00:03:58.260 "uuid": "9714e425-ee75-490b-820e-fc497cc7d79b", 00:03:58.260 "assigned_rate_limits": { 00:03:58.260 "rw_ios_per_sec": 0, 00:03:58.260 "rw_mbytes_per_sec": 0, 00:03:58.260 "r_mbytes_per_sec": 0, 00:03:58.260 "w_mbytes_per_sec": 0 00:03:58.260 }, 00:03:58.260 "claimed": false, 00:03:58.260 "zoned": false, 00:03:58.260 "supported_io_types": { 00:03:58.260 "read": true, 00:03:58.260 "write": true, 00:03:58.260 "unmap": true, 00:03:58.260 "flush": true, 00:03:58.260 "reset": true, 00:03:58.260 "nvme_admin": false, 00:03:58.260 "nvme_io": false, 00:03:58.260 "nvme_io_md": false, 00:03:58.260 "write_zeroes": true, 00:03:58.260 "zcopy": true, 00:03:58.260 "get_zone_info": false, 00:03:58.260 "zone_management": false, 00:03:58.260 "zone_append": false, 00:03:58.260 "compare": false, 00:03:58.260 "compare_and_write": false, 00:03:58.260 "abort": true, 00:03:58.260 "seek_hole": false, 00:03:58.260 "seek_data": false, 00:03:58.260 "copy": true, 00:03:58.260 "nvme_iov_md": false 00:03:58.260 }, 00:03:58.260 "memory_domains": [ 00:03:58.260 { 00:03:58.260 "dma_device_id": "system", 00:03:58.260 "dma_device_type": 1 00:03:58.260 }, 00:03:58.260 { 00:03:58.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.260 "dma_device_type": 2 00:03:58.260 } 00:03:58.260 ], 00:03:58.260 "driver_specific": {} 00:03:58.260 } 00:03:58.260 ]' 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 [2024-07-12 11:28:01.530294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:58.260 [2024-07-12 11:28:01.530347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.260 [2024-07-12 11:28:01.530368] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7b2da0 00:03:58.260 [2024-07-12 11:28:01.530378] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.260 [2024-07-12 11:28:01.532137] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.260 [2024-07-12 11:28:01.532171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:03:58.260 Passthru0 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.260 { 00:03:58.260 "name": "Malloc0", 00:03:58.260 "aliases": [ 00:03:58.260 "9714e425-ee75-490b-820e-fc497cc7d79b" 00:03:58.260 ], 00:03:58.260 "product_name": "Malloc disk", 00:03:58.260 "block_size": 512, 00:03:58.260 "num_blocks": 16384, 00:03:58.260 "uuid": "9714e425-ee75-490b-820e-fc497cc7d79b", 00:03:58.260 "assigned_rate_limits": { 00:03:58.260 "rw_ios_per_sec": 0, 00:03:58.260 "rw_mbytes_per_sec": 0, 00:03:58.260 "r_mbytes_per_sec": 0, 00:03:58.260 "w_mbytes_per_sec": 0 00:03:58.260 }, 00:03:58.260 "claimed": true, 00:03:58.260 "claim_type": "exclusive_write", 00:03:58.260 "zoned": false, 00:03:58.260 "supported_io_types": { 00:03:58.260 "read": true, 00:03:58.260 "write": true, 00:03:58.260 "unmap": true, 00:03:58.260 "flush": true, 00:03:58.260 "reset": true, 00:03:58.260 "nvme_admin": false, 00:03:58.260 "nvme_io": false, 00:03:58.260 "nvme_io_md": false, 00:03:58.260 "write_zeroes": true, 00:03:58.260 "zcopy": true, 00:03:58.260 "get_zone_info": false, 00:03:58.260 "zone_management": false, 00:03:58.260 "zone_append": false, 00:03:58.260 "compare": false, 00:03:58.260 "compare_and_write": false, 00:03:58.260 "abort": true, 00:03:58.260 "seek_hole": false, 00:03:58.260 "seek_data": false, 00:03:58.260 "copy": true, 00:03:58.260 "nvme_iov_md": false 00:03:58.260 }, 00:03:58.260 "memory_domains": [ 00:03:58.260 { 00:03:58.260 "dma_device_id": "system", 00:03:58.260 "dma_device_type": 1 00:03:58.260 }, 00:03:58.260 { 00:03:58.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.260 "dma_device_type": 2 00:03:58.260 } 00:03:58.260 ], 00:03:58.260 "driver_specific": {} 00:03:58.260 }, 00:03:58.260 { 00:03:58.260 "name": "Passthru0", 00:03:58.260 "aliases": [ 00:03:58.260 "b3fd0124-a49c-5550-8093-06a124dc6ff9" 00:03:58.260 ], 00:03:58.260 "product_name": "passthru", 00:03:58.260 "block_size": 512, 00:03:58.260 "num_blocks": 16384, 00:03:58.260 "uuid": "b3fd0124-a49c-5550-8093-06a124dc6ff9", 00:03:58.260 "assigned_rate_limits": { 00:03:58.260 "rw_ios_per_sec": 0, 00:03:58.260 "rw_mbytes_per_sec": 0, 00:03:58.260 "r_mbytes_per_sec": 0, 00:03:58.260 "w_mbytes_per_sec": 0 00:03:58.260 }, 00:03:58.260 "claimed": false, 00:03:58.260 "zoned": false, 00:03:58.260 "supported_io_types": { 00:03:58.260 "read": true, 00:03:58.260 "write": true, 00:03:58.260 "unmap": true, 00:03:58.260 "flush": true, 00:03:58.260 "reset": true, 00:03:58.260 "nvme_admin": false, 00:03:58.260 "nvme_io": false, 00:03:58.260 "nvme_io_md": false, 00:03:58.260 "write_zeroes": true, 00:03:58.260 "zcopy": true, 00:03:58.260 "get_zone_info": false, 00:03:58.260 "zone_management": false, 00:03:58.260 "zone_append": false, 00:03:58.260 "compare": false, 00:03:58.260 "compare_and_write": false, 00:03:58.260 "abort": true, 00:03:58.260 "seek_hole": false, 00:03:58.260 "seek_data": false, 00:03:58.260 "copy": true, 00:03:58.260 "nvme_iov_md": false 00:03:58.260 }, 00:03:58.260 "memory_domains": [ 00:03:58.260 { 00:03:58.260 "dma_device_id": "system", 00:03:58.260 
"dma_device_type": 1 00:03:58.260 }, 00:03:58.260 { 00:03:58.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.260 "dma_device_type": 2 00:03:58.260 } 00:03:58.260 ], 00:03:58.260 "driver_specific": { 00:03:58.260 "passthru": { 00:03:58.260 "name": "Passthru0", 00:03:58.260 "base_bdev_name": "Malloc0" 00:03:58.260 } 00:03:58.260 } 00:03:58.260 } 00:03:58.260 ]' 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.260 11:28:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.260 00:03:58.260 real 0m0.286s 00:03:58.260 user 0m0.186s 00:03:58.260 sys 0m0.033s 00:03:58.260 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.261 11:28:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.261 ************************************ 00:03:58.261 END TEST rpc_integrity 00:03:58.261 ************************************ 00:03:58.518 11:28:01 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.518 11:28:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:58.518 11:28:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.518 11:28:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.518 11:28:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.519 ************************************ 00:03:58.519 START TEST rpc_plugins 00:03:58.519 ************************************ 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.519 
11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:58.519 { 00:03:58.519 "name": "Malloc1", 00:03:58.519 "aliases": [ 00:03:58.519 "d11796b0-978a-43c1-a757-1a429860b5a6" 00:03:58.519 ], 00:03:58.519 "product_name": "Malloc disk", 00:03:58.519 "block_size": 4096, 00:03:58.519 "num_blocks": 256, 00:03:58.519 "uuid": "d11796b0-978a-43c1-a757-1a429860b5a6", 00:03:58.519 "assigned_rate_limits": { 00:03:58.519 "rw_ios_per_sec": 0, 00:03:58.519 "rw_mbytes_per_sec": 0, 00:03:58.519 "r_mbytes_per_sec": 0, 00:03:58.519 "w_mbytes_per_sec": 0 00:03:58.519 }, 00:03:58.519 "claimed": false, 00:03:58.519 "zoned": false, 00:03:58.519 "supported_io_types": { 00:03:58.519 "read": true, 00:03:58.519 "write": true, 00:03:58.519 "unmap": true, 00:03:58.519 "flush": true, 00:03:58.519 "reset": true, 00:03:58.519 "nvme_admin": false, 00:03:58.519 "nvme_io": false, 00:03:58.519 "nvme_io_md": false, 00:03:58.519 "write_zeroes": true, 00:03:58.519 "zcopy": true, 00:03:58.519 "get_zone_info": false, 00:03:58.519 "zone_management": false, 00:03:58.519 "zone_append": false, 00:03:58.519 "compare": false, 00:03:58.519 "compare_and_write": false, 00:03:58.519 "abort": true, 00:03:58.519 "seek_hole": false, 00:03:58.519 "seek_data": false, 00:03:58.519 "copy": true, 00:03:58.519 "nvme_iov_md": false 00:03:58.519 }, 00:03:58.519 "memory_domains": [ 00:03:58.519 { 00:03:58.519 "dma_device_id": "system", 00:03:58.519 "dma_device_type": 1 00:03:58.519 }, 00:03:58.519 { 00:03:58.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.519 "dma_device_type": 2 00:03:58.519 } 00:03:58.519 ], 00:03:58.519 "driver_specific": {} 00:03:58.519 } 00:03:58.519 ]' 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:58.519 11:28:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:58.519 00:03:58.519 real 0m0.133s 00:03:58.519 user 0m0.081s 00:03:58.519 sys 0m0.019s 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.519 11:28:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.519 ************************************ 00:03:58.519 END TEST rpc_plugins 00:03:58.519 ************************************ 00:03:58.519 11:28:01 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.519 11:28:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:58.519 11:28:01 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.519 11:28:01 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
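Note: rpc_plugins checks that out-of-tree RPC methods can be loaded at call time: the PYTHONPATH exported earlier puts test/rpc_plugins on the import path, and "--plugin rpc_plugin" asks the client to import that module, which provides the create_malloc and delete_malloc verbs seen above. A sketch of the same calls made directly, under the same scripts/rpc.py assumption as the previous note:

  $ export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/rpc_plugins:$PYTHONPATH
  $ ./scripts/rpc.py --plugin rpc_plugin create_malloc   # -> Malloc1
  $ ./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1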
00:03:58.519 11:28:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.519 ************************************ 00:03:58.519 START TEST rpc_trace_cmd_test 00:03:58.519 ************************************ 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:58.519 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58714", 00:03:58.519 "tpoint_group_mask": "0x8", 00:03:58.519 "iscsi_conn": { 00:03:58.519 "mask": "0x2", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "scsi": { 00:03:58.519 "mask": "0x4", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "bdev": { 00:03:58.519 "mask": "0x8", 00:03:58.519 "tpoint_mask": "0xffffffffffffffff" 00:03:58.519 }, 00:03:58.519 "nvmf_rdma": { 00:03:58.519 "mask": "0x10", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "nvmf_tcp": { 00:03:58.519 "mask": "0x20", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "ftl": { 00:03:58.519 "mask": "0x40", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "blobfs": { 00:03:58.519 "mask": "0x80", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "dsa": { 00:03:58.519 "mask": "0x200", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "thread": { 00:03:58.519 "mask": "0x400", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "nvme_pcie": { 00:03:58.519 "mask": "0x800", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "iaa": { 00:03:58.519 "mask": "0x1000", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "nvme_tcp": { 00:03:58.519 "mask": "0x2000", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "bdev_nvme": { 00:03:58.519 "mask": "0x4000", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 }, 00:03:58.519 "sock": { 00:03:58.519 "mask": "0x8000", 00:03:58.519 "tpoint_mask": "0x0" 00:03:58.519 } 00:03:58.519 }' 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:58.519 11:28:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:58.777 11:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:58.777 11:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:58.777 11:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:58.777 11:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:58.777 11:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:58.777 11:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:58.777 11:28:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:58.777 00:03:58.777 real 0m0.258s 00:03:58.777 user 0m0.223s 00:03:58.777 sys 0m0.024s 00:03:58.777 ************************************ 00:03:58.777 END TEST rpc_trace_cmd_test 00:03:58.777 11:28:02 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.777 11:28:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.777 ************************************ 00:03:58.777 11:28:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.777 11:28:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:58.777 11:28:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:58.777 11:28:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:58.777 11:28:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.777 11:28:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.777 11:28:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.777 ************************************ 00:03:58.777 START TEST rpc_daemon_integrity 00:03:58.777 ************************************ 00:03:58.777 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:58.777 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.777 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.777 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.777 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.777 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.777 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.035 { 00:03:59.035 "name": "Malloc2", 00:03:59.035 "aliases": [ 00:03:59.035 "4ccdda3b-a2cd-46a4-9703-660851247444" 00:03:59.035 ], 00:03:59.035 "product_name": "Malloc disk", 00:03:59.035 "block_size": 512, 00:03:59.035 "num_blocks": 16384, 00:03:59.035 "uuid": "4ccdda3b-a2cd-46a4-9703-660851247444", 00:03:59.035 "assigned_rate_limits": { 00:03:59.035 "rw_ios_per_sec": 0, 00:03:59.035 "rw_mbytes_per_sec": 0, 00:03:59.035 "r_mbytes_per_sec": 0, 00:03:59.035 "w_mbytes_per_sec": 0 00:03:59.035 }, 00:03:59.035 "claimed": false, 00:03:59.035 "zoned": false, 00:03:59.035 "supported_io_types": { 00:03:59.035 "read": true, 00:03:59.035 "write": true, 00:03:59.035 "unmap": true, 00:03:59.035 "flush": true, 00:03:59.035 "reset": true, 00:03:59.035 "nvme_admin": false, 00:03:59.035 "nvme_io": false, 00:03:59.035 "nvme_io_md": false, 00:03:59.035 "write_zeroes": true, 00:03:59.035 "zcopy": true, 00:03:59.035 "get_zone_info": false, 00:03:59.035 "zone_management": false, 00:03:59.035 "zone_append": 
false, 00:03:59.035 "compare": false, 00:03:59.035 "compare_and_write": false, 00:03:59.035 "abort": true, 00:03:59.035 "seek_hole": false, 00:03:59.035 "seek_data": false, 00:03:59.035 "copy": true, 00:03:59.035 "nvme_iov_md": false 00:03:59.035 }, 00:03:59.035 "memory_domains": [ 00:03:59.035 { 00:03:59.035 "dma_device_id": "system", 00:03:59.035 "dma_device_type": 1 00:03:59.035 }, 00:03:59.035 { 00:03:59.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.035 "dma_device_type": 2 00:03:59.035 } 00:03:59.035 ], 00:03:59.035 "driver_specific": {} 00:03:59.035 } 00:03:59.035 ]' 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.035 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.036 [2024-07-12 11:28:02.339012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:59.036 [2024-07-12 11:28:02.339076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.036 [2024-07-12 11:28:02.339102] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x817be0 00:03:59.036 [2024-07-12 11:28:02.339113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.036 [2024-07-12 11:28:02.341088] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.036 [2024-07-12 11:28:02.341127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.036 Passthru0 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.036 { 00:03:59.036 "name": "Malloc2", 00:03:59.036 "aliases": [ 00:03:59.036 "4ccdda3b-a2cd-46a4-9703-660851247444" 00:03:59.036 ], 00:03:59.036 "product_name": "Malloc disk", 00:03:59.036 "block_size": 512, 00:03:59.036 "num_blocks": 16384, 00:03:59.036 "uuid": "4ccdda3b-a2cd-46a4-9703-660851247444", 00:03:59.036 "assigned_rate_limits": { 00:03:59.036 "rw_ios_per_sec": 0, 00:03:59.036 "rw_mbytes_per_sec": 0, 00:03:59.036 "r_mbytes_per_sec": 0, 00:03:59.036 "w_mbytes_per_sec": 0 00:03:59.036 }, 00:03:59.036 "claimed": true, 00:03:59.036 "claim_type": "exclusive_write", 00:03:59.036 "zoned": false, 00:03:59.036 "supported_io_types": { 00:03:59.036 "read": true, 00:03:59.036 "write": true, 00:03:59.036 "unmap": true, 00:03:59.036 "flush": true, 00:03:59.036 "reset": true, 00:03:59.036 "nvme_admin": false, 00:03:59.036 "nvme_io": false, 00:03:59.036 "nvme_io_md": false, 00:03:59.036 "write_zeroes": true, 00:03:59.036 "zcopy": true, 00:03:59.036 "get_zone_info": false, 00:03:59.036 "zone_management": false, 00:03:59.036 "zone_append": false, 00:03:59.036 "compare": false, 00:03:59.036 "compare_and_write": false, 00:03:59.036 "abort": true, 00:03:59.036 
"seek_hole": false, 00:03:59.036 "seek_data": false, 00:03:59.036 "copy": true, 00:03:59.036 "nvme_iov_md": false 00:03:59.036 }, 00:03:59.036 "memory_domains": [ 00:03:59.036 { 00:03:59.036 "dma_device_id": "system", 00:03:59.036 "dma_device_type": 1 00:03:59.036 }, 00:03:59.036 { 00:03:59.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.036 "dma_device_type": 2 00:03:59.036 } 00:03:59.036 ], 00:03:59.036 "driver_specific": {} 00:03:59.036 }, 00:03:59.036 { 00:03:59.036 "name": "Passthru0", 00:03:59.036 "aliases": [ 00:03:59.036 "36b8f916-3789-529d-8753-e5f3045c781d" 00:03:59.036 ], 00:03:59.036 "product_name": "passthru", 00:03:59.036 "block_size": 512, 00:03:59.036 "num_blocks": 16384, 00:03:59.036 "uuid": "36b8f916-3789-529d-8753-e5f3045c781d", 00:03:59.036 "assigned_rate_limits": { 00:03:59.036 "rw_ios_per_sec": 0, 00:03:59.036 "rw_mbytes_per_sec": 0, 00:03:59.036 "r_mbytes_per_sec": 0, 00:03:59.036 "w_mbytes_per_sec": 0 00:03:59.036 }, 00:03:59.036 "claimed": false, 00:03:59.036 "zoned": false, 00:03:59.036 "supported_io_types": { 00:03:59.036 "read": true, 00:03:59.036 "write": true, 00:03:59.036 "unmap": true, 00:03:59.036 "flush": true, 00:03:59.036 "reset": true, 00:03:59.036 "nvme_admin": false, 00:03:59.036 "nvme_io": false, 00:03:59.036 "nvme_io_md": false, 00:03:59.036 "write_zeroes": true, 00:03:59.036 "zcopy": true, 00:03:59.036 "get_zone_info": false, 00:03:59.036 "zone_management": false, 00:03:59.036 "zone_append": false, 00:03:59.036 "compare": false, 00:03:59.036 "compare_and_write": false, 00:03:59.036 "abort": true, 00:03:59.036 "seek_hole": false, 00:03:59.036 "seek_data": false, 00:03:59.036 "copy": true, 00:03:59.036 "nvme_iov_md": false 00:03:59.036 }, 00:03:59.036 "memory_domains": [ 00:03:59.036 { 00:03:59.036 "dma_device_id": "system", 00:03:59.036 "dma_device_type": 1 00:03:59.036 }, 00:03:59.036 { 00:03:59.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.036 "dma_device_type": 2 00:03:59.036 } 00:03:59.036 ], 00:03:59.036 "driver_specific": { 00:03:59.036 "passthru": { 00:03:59.036 "name": "Passthru0", 00:03:59.036 "base_bdev_name": "Malloc2" 00:03:59.036 } 00:03:59.036 } 00:03:59.036 } 00:03:59.036 ]' 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:59.036 11:28:02 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.036 00:03:59.036 real 0m0.278s 00:03:59.036 user 0m0.180s 00:03:59.036 sys 0m0.037s 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.036 ************************************ 00:03:59.036 END TEST rpc_daemon_integrity 00:03:59.036 ************************************ 00:03:59.036 11:28:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:59.294 11:28:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:59.294 11:28:02 rpc -- rpc/rpc.sh@84 -- # killprocess 58714 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@948 -- # '[' -z 58714 ']' 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@952 -- # kill -0 58714 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@953 -- # uname 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58714 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58714' 00:03:59.294 killing process with pid 58714 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@967 -- # kill 58714 00:03:59.294 11:28:02 rpc -- common/autotest_common.sh@972 -- # wait 58714 00:03:59.551 00:03:59.551 real 0m2.698s 00:03:59.551 user 0m3.447s 00:03:59.551 sys 0m0.660s 00:03:59.551 11:28:02 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.551 ************************************ 00:03:59.551 END TEST rpc 00:03:59.551 11:28:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.551 ************************************ 00:03:59.551 11:28:02 -- common/autotest_common.sh@1142 -- # return 0 00:03:59.551 11:28:02 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:59.551 11:28:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.551 11:28:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.551 11:28:02 -- common/autotest_common.sh@10 -- # set +x 00:03:59.551 ************************************ 00:03:59.551 START TEST skip_rpc 00:03:59.551 ************************************ 00:03:59.551 11:28:02 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:59.808 * Looking for test storage... 
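Note: the skip_rpc test starting here launches spdk_tgt with --no-rpc-server, so no /var/tmp/spdk.sock listener is created, and then asserts that an RPC attempt fails (the NOT wrapper below expects a non-zero exit from rpc_cmd spdk_get_version). Against a normally started target the same call succeeds; a sketch, with the same scripts/rpc.py assumption as above:

  $ ./scripts/rpc.py spdk_get_version    # fails under --no-rpc-server, succeeds otherwise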
00:03:59.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:59.808 11:28:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:59.808 11:28:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:59.808 11:28:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:59.808 11:28:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.808 11:28:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.808 11:28:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.808 ************************************ 00:03:59.808 START TEST skip_rpc 00:03:59.808 ************************************ 00:03:59.808 11:28:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:59.808 11:28:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58901 00:03:59.808 11:28:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.808 11:28:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:59.808 11:28:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:59.808 [2024-07-12 11:28:03.127676] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:03:59.808 [2024-07-12 11:28:03.127765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58901 ] 00:04:00.065 [2024-07-12 11:28:03.259574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.065 [2024-07-12 11:28:03.380264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.065 [2024-07-12 11:28:03.434797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58901 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58901 ']' 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58901 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58901 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.325 killing process with pid 58901 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58901' 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58901 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58901 00:04:05.325 00:04:05.325 real 0m5.417s 00:04:05.325 user 0m5.037s 00:04:05.325 sys 0m0.269s 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.325 11:28:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.325 ************************************ 00:04:05.325 END TEST skip_rpc 00:04:05.325 ************************************ 00:04:05.325 11:28:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:05.325 11:28:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:05.325 11:28:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.325 11:28:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.325 11:28:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.325 ************************************ 00:04:05.325 START TEST skip_rpc_with_json 00:04:05.325 ************************************ 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58993 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58993 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 58993 ']' 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.325 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.325 [2024-07-12 11:28:08.586314] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:05.325 [2024-07-12 11:28:08.586418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58993 ] 00:04:05.325 [2024-07-12 11:28:08.720200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.584 [2024-07-12 11:28:08.841779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.584 [2024-07-12 11:28:08.895997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.169 [2024-07-12 11:28:09.505058] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:06.169 request: 00:04:06.169 { 00:04:06.169 "trtype": "tcp", 00:04:06.169 "method": "nvmf_get_transports", 00:04:06.169 "req_id": 1 00:04:06.169 } 00:04:06.169 Got JSON-RPC error response 00:04:06.169 response: 00:04:06.169 { 00:04:06.169 "code": -19, 00:04:06.169 "message": "No such device" 00:04:06.169 } 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.169 [2024-07-12 11:28:09.517178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.169 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.428 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.428 11:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:06.428 { 00:04:06.428 "subsystems": [ 00:04:06.428 { 00:04:06.428 "subsystem": "keyring", 00:04:06.428 "config": [] 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "subsystem": "iobuf", 00:04:06.428 "config": [ 00:04:06.428 { 00:04:06.428 "method": "iobuf_set_options", 00:04:06.428 "params": { 00:04:06.428 "small_pool_count": 8192, 00:04:06.428 "large_pool_count": 1024, 00:04:06.428 "small_bufsize": 8192, 00:04:06.428 "large_bufsize": 135168 00:04:06.428 } 00:04:06.428 } 00:04:06.428 
] 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "subsystem": "sock", 00:04:06.428 "config": [ 00:04:06.428 { 00:04:06.428 "method": "sock_set_default_impl", 00:04:06.428 "params": { 00:04:06.428 "impl_name": "uring" 00:04:06.428 } 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "method": "sock_impl_set_options", 00:04:06.428 "params": { 00:04:06.428 "impl_name": "ssl", 00:04:06.428 "recv_buf_size": 4096, 00:04:06.428 "send_buf_size": 4096, 00:04:06.428 "enable_recv_pipe": true, 00:04:06.428 "enable_quickack": false, 00:04:06.428 "enable_placement_id": 0, 00:04:06.428 "enable_zerocopy_send_server": true, 00:04:06.428 "enable_zerocopy_send_client": false, 00:04:06.428 "zerocopy_threshold": 0, 00:04:06.428 "tls_version": 0, 00:04:06.428 "enable_ktls": false 00:04:06.428 } 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "method": "sock_impl_set_options", 00:04:06.428 "params": { 00:04:06.428 "impl_name": "posix", 00:04:06.428 "recv_buf_size": 2097152, 00:04:06.428 "send_buf_size": 2097152, 00:04:06.428 "enable_recv_pipe": true, 00:04:06.428 "enable_quickack": false, 00:04:06.428 "enable_placement_id": 0, 00:04:06.428 "enable_zerocopy_send_server": true, 00:04:06.428 "enable_zerocopy_send_client": false, 00:04:06.428 "zerocopy_threshold": 0, 00:04:06.428 "tls_version": 0, 00:04:06.428 "enable_ktls": false 00:04:06.428 } 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "method": "sock_impl_set_options", 00:04:06.428 "params": { 00:04:06.428 "impl_name": "uring", 00:04:06.428 "recv_buf_size": 2097152, 00:04:06.428 "send_buf_size": 2097152, 00:04:06.428 "enable_recv_pipe": true, 00:04:06.428 "enable_quickack": false, 00:04:06.428 "enable_placement_id": 0, 00:04:06.428 "enable_zerocopy_send_server": false, 00:04:06.428 "enable_zerocopy_send_client": false, 00:04:06.428 "zerocopy_threshold": 0, 00:04:06.428 "tls_version": 0, 00:04:06.428 "enable_ktls": false 00:04:06.428 } 00:04:06.428 } 00:04:06.428 ] 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "subsystem": "vmd", 00:04:06.428 "config": [] 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "subsystem": "accel", 00:04:06.428 "config": [ 00:04:06.428 { 00:04:06.428 "method": "accel_set_options", 00:04:06.428 "params": { 00:04:06.428 "small_cache_size": 128, 00:04:06.428 "large_cache_size": 16, 00:04:06.428 "task_count": 2048, 00:04:06.428 "sequence_count": 2048, 00:04:06.428 "buf_count": 2048 00:04:06.428 } 00:04:06.428 } 00:04:06.428 ] 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "subsystem": "bdev", 00:04:06.428 "config": [ 00:04:06.428 { 00:04:06.428 "method": "bdev_set_options", 00:04:06.428 "params": { 00:04:06.428 "bdev_io_pool_size": 65535, 00:04:06.428 "bdev_io_cache_size": 256, 00:04:06.428 "bdev_auto_examine": true, 00:04:06.428 "iobuf_small_cache_size": 128, 00:04:06.428 "iobuf_large_cache_size": 16 00:04:06.428 } 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "method": "bdev_raid_set_options", 00:04:06.428 "params": { 00:04:06.428 "process_window_size_kb": 1024 00:04:06.428 } 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "method": "bdev_iscsi_set_options", 00:04:06.428 "params": { 00:04:06.428 "timeout_sec": 30 00:04:06.428 } 00:04:06.428 }, 00:04:06.428 { 00:04:06.428 "method": "bdev_nvme_set_options", 00:04:06.428 "params": { 00:04:06.428 "action_on_timeout": "none", 00:04:06.428 "timeout_us": 0, 00:04:06.428 "timeout_admin_us": 0, 00:04:06.428 "keep_alive_timeout_ms": 10000, 00:04:06.428 "arbitration_burst": 0, 00:04:06.428 "low_priority_weight": 0, 00:04:06.428 "medium_priority_weight": 0, 00:04:06.428 "high_priority_weight": 0, 00:04:06.428 
"nvme_adminq_poll_period_us": 10000, 00:04:06.428 "nvme_ioq_poll_period_us": 0, 00:04:06.428 "io_queue_requests": 0, 00:04:06.428 "delay_cmd_submit": true, 00:04:06.428 "transport_retry_count": 4, 00:04:06.428 "bdev_retry_count": 3, 00:04:06.428 "transport_ack_timeout": 0, 00:04:06.429 "ctrlr_loss_timeout_sec": 0, 00:04:06.429 "reconnect_delay_sec": 0, 00:04:06.429 "fast_io_fail_timeout_sec": 0, 00:04:06.429 "disable_auto_failback": false, 00:04:06.429 "generate_uuids": false, 00:04:06.429 "transport_tos": 0, 00:04:06.429 "nvme_error_stat": false, 00:04:06.429 "rdma_srq_size": 0, 00:04:06.429 "io_path_stat": false, 00:04:06.429 "allow_accel_sequence": false, 00:04:06.429 "rdma_max_cq_size": 0, 00:04:06.429 "rdma_cm_event_timeout_ms": 0, 00:04:06.429 "dhchap_digests": [ 00:04:06.429 "sha256", 00:04:06.429 "sha384", 00:04:06.429 "sha512" 00:04:06.429 ], 00:04:06.429 "dhchap_dhgroups": [ 00:04:06.429 "null", 00:04:06.429 "ffdhe2048", 00:04:06.429 "ffdhe3072", 00:04:06.429 "ffdhe4096", 00:04:06.429 "ffdhe6144", 00:04:06.429 "ffdhe8192" 00:04:06.429 ] 00:04:06.429 } 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "method": "bdev_nvme_set_hotplug", 00:04:06.429 "params": { 00:04:06.429 "period_us": 100000, 00:04:06.429 "enable": false 00:04:06.429 } 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "method": "bdev_wait_for_examine" 00:04:06.429 } 00:04:06.429 ] 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "subsystem": "scsi", 00:04:06.429 "config": null 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "subsystem": "scheduler", 00:04:06.429 "config": [ 00:04:06.429 { 00:04:06.429 "method": "framework_set_scheduler", 00:04:06.429 "params": { 00:04:06.429 "name": "static" 00:04:06.429 } 00:04:06.429 } 00:04:06.429 ] 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "subsystem": "vhost_scsi", 00:04:06.429 "config": [] 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "subsystem": "vhost_blk", 00:04:06.429 "config": [] 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "subsystem": "ublk", 00:04:06.429 "config": [] 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "subsystem": "nbd", 00:04:06.429 "config": [] 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "subsystem": "nvmf", 00:04:06.429 "config": [ 00:04:06.429 { 00:04:06.429 "method": "nvmf_set_config", 00:04:06.429 "params": { 00:04:06.429 "discovery_filter": "match_any", 00:04:06.429 "admin_cmd_passthru": { 00:04:06.429 "identify_ctrlr": false 00:04:06.429 } 00:04:06.429 } 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "method": "nvmf_set_max_subsystems", 00:04:06.429 "params": { 00:04:06.429 "max_subsystems": 1024 00:04:06.429 } 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "method": "nvmf_set_crdt", 00:04:06.429 "params": { 00:04:06.429 "crdt1": 0, 00:04:06.429 "crdt2": 0, 00:04:06.429 "crdt3": 0 00:04:06.429 } 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "method": "nvmf_create_transport", 00:04:06.429 "params": { 00:04:06.429 "trtype": "TCP", 00:04:06.429 "max_queue_depth": 128, 00:04:06.429 "max_io_qpairs_per_ctrlr": 127, 00:04:06.429 "in_capsule_data_size": 4096, 00:04:06.429 "max_io_size": 131072, 00:04:06.429 "io_unit_size": 131072, 00:04:06.429 "max_aq_depth": 128, 00:04:06.429 "num_shared_buffers": 511, 00:04:06.429 "buf_cache_size": 4294967295, 00:04:06.429 "dif_insert_or_strip": false, 00:04:06.429 "zcopy": false, 00:04:06.429 "c2h_success": true, 00:04:06.429 "sock_priority": 0, 00:04:06.429 "abort_timeout_sec": 1, 00:04:06.429 "ack_timeout": 0, 00:04:06.429 "data_wr_pool_size": 0 00:04:06.429 } 00:04:06.429 } 00:04:06.429 ] 00:04:06.429 }, 00:04:06.429 { 00:04:06.429 "subsystem": 
"iscsi", 00:04:06.429 "config": [ 00:04:06.429 { 00:04:06.429 "method": "iscsi_set_options", 00:04:06.429 "params": { 00:04:06.429 "node_base": "iqn.2016-06.io.spdk", 00:04:06.429 "max_sessions": 128, 00:04:06.429 "max_connections_per_session": 2, 00:04:06.429 "max_queue_depth": 64, 00:04:06.429 "default_time2wait": 2, 00:04:06.429 "default_time2retain": 20, 00:04:06.429 "first_burst_length": 8192, 00:04:06.429 "immediate_data": true, 00:04:06.429 "allow_duplicated_isid": false, 00:04:06.429 "error_recovery_level": 0, 00:04:06.429 "nop_timeout": 60, 00:04:06.429 "nop_in_interval": 30, 00:04:06.429 "disable_chap": false, 00:04:06.429 "require_chap": false, 00:04:06.429 "mutual_chap": false, 00:04:06.429 "chap_group": 0, 00:04:06.429 "max_large_datain_per_connection": 64, 00:04:06.429 "max_r2t_per_connection": 4, 00:04:06.429 "pdu_pool_size": 36864, 00:04:06.429 "immediate_data_pool_size": 16384, 00:04:06.429 "data_out_pool_size": 2048 00:04:06.429 } 00:04:06.429 } 00:04:06.429 ] 00:04:06.429 } 00:04:06.429 ] 00:04:06.429 } 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58993 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 58993 ']' 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 58993 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58993 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:06.429 killing process with pid 58993 00:04:06.429 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:06.430 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58993' 00:04:06.430 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 58993 00:04:06.430 11:28:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 58993 00:04:06.688 11:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59015 00:04:06.688 11:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:06.688 11:28:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59015 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59015 ']' 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59015 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59015 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:11.949 killing process with pid 59015 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59015' 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59015 00:04:11.949 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59015 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:12.207 00:04:12.207 real 0m7.030s 00:04:12.207 user 0m6.666s 00:04:12.207 sys 0m0.671s 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.207 ************************************ 00:04:12.207 END TEST skip_rpc_with_json 00:04:12.207 ************************************ 00:04:12.207 11:28:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:12.207 11:28:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:12.207 11:28:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.207 11:28:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.207 11:28:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.207 ************************************ 00:04:12.207 START TEST skip_rpc_with_delay 00:04:12.207 ************************************ 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:12.207 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:12.466 
[2024-07-12 11:28:15.665821] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:12.466 [2024-07-12 11:28:15.665974] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:12.466 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:12.466 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:12.466 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:12.466 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:12.466 00:04:12.466 real 0m0.076s 00:04:12.466 user 0m0.049s 00:04:12.466 sys 0m0.026s 00:04:12.466 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.466 11:28:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:12.466 ************************************ 00:04:12.466 END TEST skip_rpc_with_delay 00:04:12.466 ************************************ 00:04:12.466 11:28:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:12.466 11:28:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:12.466 11:28:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:12.466 11:28:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:12.466 11:28:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.466 11:28:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.466 11:28:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.466 ************************************ 00:04:12.466 START TEST exit_on_failed_rpc_init 00:04:12.466 ************************************ 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59130 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59130 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59130 ']' 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:12.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:12.466 11:28:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.466 [2024-07-12 11:28:15.783190] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
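The skip_rpc_with_delay check above hinges on an option conflict: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server disables the RPC server. A minimal sketch of reproducing that check by hand (binary path as used in this run; the harness's NOT/es bookkeeping is simplified to a plain if):

if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server' >&2
    exit 1
fi
# expected failure, matching the error logged above:
# spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.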
00:04:12.466 [2024-07-12 11:28:15.783283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:04:12.724 [2024-07-12 11:28:15.924947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.724 [2024-07-12 11:28:16.055770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.724 [2024-07-12 11:28:16.110069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:13.656 11:28:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:13.656 [2024-07-12 11:28:16.858734] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:13.656 [2024-07-12 11:28:16.858876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59148 ] 00:04:13.656 [2024-07-12 11:28:16.996616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.915 [2024-07-12 11:28:17.141272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.915 [2024-07-12 11:28:17.141395] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
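The second spdk_tgt launched above fails by design: both instances default to the same /var/tmp/spdk.sock RPC socket. A rough sketch of the collision, plus the usual way around it with -r (the alternate socket path is an assumption, not something this test uses):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_TGT" -m 0x1 &                               # first instance owns the default /var/tmp/spdk.sock
"$SPDK_TGT" -m 0x2                                 # fails: RPC Unix domain socket path in use, app exits non-zero
"$SPDK_TGT" -m 0x2 -r /var/tmp/spdk_second.sock    # assumed-free path; a second instance would bind its own socket instead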
00:04:13.915 [2024-07-12 11:28:17.141411] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:13.915 [2024-07-12 11:28:17.141420] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59130 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59130 ']' 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59130 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59130 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:13.915 killing process with pid 59130 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59130' 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59130 00:04:13.915 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59130 00:04:14.482 00:04:14.482 real 0m1.972s 00:04:14.482 user 0m2.345s 00:04:14.482 sys 0m0.425s 00:04:14.482 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.482 11:28:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.482 ************************************ 00:04:14.482 END TEST exit_on_failed_rpc_init 00:04:14.482 ************************************ 00:04:14.482 11:28:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.482 11:28:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:14.482 00:04:14.482 real 0m14.754s 00:04:14.482 user 0m14.186s 00:04:14.482 sys 0m1.553s 00:04:14.482 11:28:17 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.482 ************************************ 00:04:14.482 11:28:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.482 END TEST skip_rpc 00:04:14.483 ************************************ 00:04:14.483 11:28:17 -- common/autotest_common.sh@1142 -- # return 0 00:04:14.483 11:28:17 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:14.483 11:28:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.483 
11:28:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.483 11:28:17 -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 ************************************ 00:04:14.483 START TEST rpc_client 00:04:14.483 ************************************ 00:04:14.483 11:28:17 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:14.483 * Looking for test storage... 00:04:14.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:14.483 11:28:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:14.483 OK 00:04:14.483 11:28:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:14.483 00:04:14.483 real 0m0.088s 00:04:14.483 user 0m0.042s 00:04:14.483 sys 0m0.051s 00:04:14.483 11:28:17 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.483 11:28:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 ************************************ 00:04:14.483 END TEST rpc_client 00:04:14.483 ************************************ 00:04:14.483 11:28:17 -- common/autotest_common.sh@1142 -- # return 0 00:04:14.483 11:28:17 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:14.483 11:28:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.483 11:28:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.483 11:28:17 -- common/autotest_common.sh@10 -- # set +x 00:04:14.483 ************************************ 00:04:14.483 START TEST json_config 00:04:14.483 ************************************ 00:04:14.483 11:28:17 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:14.741 11:28:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.741 11:28:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.741 11:28:17 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:14.741 11:28:17 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.741 11:28:17 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.741 11:28:17 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.741 11:28:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.741 11:28:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.742 11:28:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.742 11:28:17 json_config -- paths/export.sh@5 -- # export PATH 00:04:14.742 11:28:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@47 -- # : 0 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:14.742 11:28:17 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:14.742 INFO: JSON configuration test init 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.742 11:28:17 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:14.742 11:28:17 json_config -- json_config/common.sh@9 -- # local app=target 00:04:14.742 11:28:17 json_config -- json_config/common.sh@10 -- # shift 00:04:14.742 11:28:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.742 11:28:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.742 11:28:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.742 11:28:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.742 11:28:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.742 11:28:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59266 00:04:14.742 Waiting for target to run... 00:04:14.742 11:28:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
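The start-up handshake used here is: launch the target with --wait-for-rpc on a private RPC socket, then poll that socket until it answers, which is what the waitforlisten call that follows does. A sketch of the pattern, with the polling loop as an assumed simplification of the waitforlisten helper (command line as shown in this run):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || exit 1    # target died before it started listening
    sleep 0.2
done

With --wait-for-rpc the target sits in a pre-init state until it is configured over RPC, which is what the load_config call traced a little further down relies on.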
00:04:14.742 11:28:17 json_config -- json_config/common.sh@25 -- # waitforlisten 59266 /var/tmp/spdk_tgt.sock 00:04:14.742 11:28:17 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@829 -- # '[' -z 59266 ']' 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.742 11:28:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.742 [2024-07-12 11:28:18.057185] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:14.742 [2024-07-12 11:28:18.057315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59266 ] 00:04:15.369 [2024-07-12 11:28:18.494875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.369 [2024-07-12 11:28:18.611418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.957 11:28:19 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.957 00:04:15.957 11:28:19 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:15.957 11:28:19 json_config -- json_config/common.sh@26 -- # echo '' 00:04:15.958 11:28:19 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:15.958 11:28:19 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:15.958 11:28:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.958 11:28:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.958 11:28:19 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:15.958 11:28:19 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:15.958 11:28:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:15.958 11:28:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.958 11:28:19 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:15.958 11:28:19 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:15.958 11:28:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:16.216 [2024-07-12 11:28:19.488055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:16.474 11:28:19 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:16.474 11:28:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:16.474 11:28:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.474 11:28:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.474 11:28:19 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:16.474 11:28:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:16.474 11:28:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:16.474 11:28:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:16.474 11:28:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:16.474 11:28:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:16.731 11:28:19 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:16.731 11:28:19 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:16.731 11:28:19 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:16.731 11:28:19 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:16.731 11:28:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:16.731 11:28:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.731 11:28:19 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:16.732 11:28:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.732 11:28:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:16.732 11:28:19 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:16.732 11:28:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:16.989 MallocForNvmf0 00:04:16.989 11:28:20 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:16.989 11:28:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.246 MallocForNvmf1 00:04:17.246 11:28:20 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.246 11:28:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.504 [2024-07-12 11:28:20.812414] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.504 11:28:20 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.504 11:28:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.761 11:28:21 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:17.761 11:28:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.019 11:28:21 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.019 11:28:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.277 11:28:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.277 11:28:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.535 [2024-07-12 11:28:21.901014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.535 11:28:21 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:18.535 11:28:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.535 11:28:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.535 11:28:21 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:18.535 11:28:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.535 11:28:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.535 11:28:21 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:18.792 11:28:21 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.792 11:28:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.792 MallocBdevForConfigChangeCheck 00:04:18.792 11:28:22 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:18.792 11:28:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.792 11:28:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.050 11:28:22 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:19.050 11:28:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.307 INFO: shutting down applications... 00:04:19.307 11:28:22 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
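The create_nvmf_subsystem_config step above builds the NVMe-oF target entirely over JSON-RPC; collected in one place, the same calls look like this (tgt_rpc mirrors the harness wrapper traced above; every name and value is the one this run used, and only the save_config redirection target is illustrative):

tgt_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
tgt_rpc save_config > my_tgt_config.json    # illustrative snapshot, analogous to the spdk_tgt_config.json compared later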
00:04:19.307 11:28:22 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:19.307 11:28:22 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:19.307 11:28:22 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:19.307 11:28:22 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:19.565 Calling clear_iscsi_subsystem 00:04:19.565 Calling clear_nvmf_subsystem 00:04:19.565 Calling clear_nbd_subsystem 00:04:19.565 Calling clear_ublk_subsystem 00:04:19.565 Calling clear_vhost_blk_subsystem 00:04:19.565 Calling clear_vhost_scsi_subsystem 00:04:19.565 Calling clear_bdev_subsystem 00:04:19.565 11:28:22 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:19.565 11:28:22 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:19.565 11:28:22 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:19.565 11:28:22 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:19.565 11:28:22 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.565 11:28:22 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:20.129 11:28:23 json_config -- json_config/json_config.sh@345 -- # break 00:04:20.129 11:28:23 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:20.129 11:28:23 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:20.129 11:28:23 json_config -- json_config/common.sh@31 -- # local app=target 00:04:20.129 11:28:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:20.129 11:28:23 json_config -- json_config/common.sh@35 -- # [[ -n 59266 ]] 00:04:20.129 11:28:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59266 00:04:20.129 11:28:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:20.129 11:28:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.129 11:28:23 json_config -- json_config/common.sh@41 -- # kill -0 59266 00:04:20.129 11:28:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.387 11:28:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.387 11:28:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.387 11:28:23 json_config -- json_config/common.sh@41 -- # kill -0 59266 00:04:20.387 11:28:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.387 11:28:23 json_config -- json_config/common.sh@43 -- # break 00:04:20.387 SPDK target shutdown done 00:04:20.387 11:28:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.387 11:28:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.387 INFO: relaunching applications... 00:04:20.387 11:28:23 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
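The json_config_clear teardown traced above has two parts: clear_config.py walks the clear_*_subsystem steps printed above, then the live config is re-saved and piped through config_filter.py until check_empty passes. A condensed sketch of that verification (the single pipeline is an assumed simplification of the retry loop; scripts and method names are the ones invoked above):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK"/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
"$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | "$SPDK"/test/json_config/config_filter.py -method delete_global_parameters \
    | "$SPDK"/test/json_config/config_filter.py -method check_empty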
00:04:20.387 11:28:23 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.387 11:28:23 json_config -- json_config/common.sh@9 -- # local app=target 00:04:20.387 11:28:23 json_config -- json_config/common.sh@10 -- # shift 00:04:20.387 11:28:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.387 11:28:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.387 11:28:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.387 11:28:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.387 11:28:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.387 11:28:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59457 00:04:20.387 11:28:23 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.387 Waiting for target to run... 00:04:20.387 11:28:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.387 11:28:23 json_config -- json_config/common.sh@25 -- # waitforlisten 59457 /var/tmp/spdk_tgt.sock 00:04:20.387 11:28:23 json_config -- common/autotest_common.sh@829 -- # '[' -z 59457 ']' 00:04:20.387 11:28:23 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.387 11:28:23 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:20.387 11:28:23 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.387 11:28:23 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.387 11:28:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.645 [2024-07-12 11:28:23.868242] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:20.645 [2024-07-12 11:28:23.868329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59457 ] 00:04:20.902 [2024-07-12 11:28:24.308427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.163 [2024-07-12 11:28:24.410992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.163 [2024-07-12 11:28:24.537346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:21.420 [2024-07-12 11:28:24.748706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.420 [2024-07-12 11:28:24.780784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.677 11:28:24 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.677 11:28:24 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:21.677 00:04:21.677 11:28:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.677 11:28:24 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:21.677 INFO: Checking if target configuration is the same... 
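The relaunch above loads the whole configuration from a file at startup via --json instead of configuring over RPC. A minimal sketch of what such a file looks like and how it is passed in (the trimmed content and the /tmp path are illustrative, not this run's spdk_tgt_config.json; the subsystem/config/method/params shape matches the dump earlier in the log):

cat > /tmp/minimal_tgt_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_tgt_config.json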
00:04:21.677 11:28:24 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:21.677 11:28:24 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.677 11:28:24 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:21.677 11:28:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.677 + '[' 2 -ne 2 ']' 00:04:21.677 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:21.677 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:21.677 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:21.677 +++ basename /dev/fd/62 00:04:21.677 ++ mktemp /tmp/62.XXX 00:04:21.677 + tmp_file_1=/tmp/62.EEU 00:04:21.677 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.677 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.677 + tmp_file_2=/tmp/spdk_tgt_config.json.q3y 00:04:21.677 + ret=0 00:04:21.677 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.934 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.192 + diff -u /tmp/62.EEU /tmp/spdk_tgt_config.json.q3y 00:04:22.192 INFO: JSON config files are the same 00:04:22.192 + echo 'INFO: JSON config files are the same' 00:04:22.192 + rm /tmp/62.EEU /tmp/spdk_tgt_config.json.q3y 00:04:22.192 + exit 0 00:04:22.192 11:28:25 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:22.192 INFO: changing configuration and checking if this can be detected... 00:04:22.192 11:28:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:22.192 11:28:25 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.192 11:28:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.450 11:28:25 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.450 11:28:25 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:22.450 11:28:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.450 + '[' 2 -ne 2 ']' 00:04:22.450 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:22.450 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:22.450 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:22.450 +++ basename /dev/fd/62 00:04:22.450 ++ mktemp /tmp/62.XXX 00:04:22.450 + tmp_file_1=/tmp/62.nOd 00:04:22.450 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:22.450 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.450 + tmp_file_2=/tmp/spdk_tgt_config.json.Npw 00:04:22.450 + ret=0 00:04:22.450 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.708 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:22.708 + diff -u /tmp/62.nOd /tmp/spdk_tgt_config.json.Npw 00:04:22.708 + ret=1 00:04:22.708 + echo '=== Start of file: /tmp/62.nOd ===' 00:04:22.708 + cat /tmp/62.nOd 00:04:22.708 + echo '=== End of file: /tmp/62.nOd ===' 00:04:22.708 + echo '' 00:04:22.708 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Npw ===' 00:04:22.708 + cat /tmp/spdk_tgt_config.json.Npw 00:04:22.708 + echo '=== End of file: /tmp/spdk_tgt_config.json.Npw ===' 00:04:22.708 + echo '' 00:04:22.708 + rm /tmp/62.nOd /tmp/spdk_tgt_config.json.Npw 00:04:22.708 + exit 1 00:04:22.708 INFO: configuration change detected. 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:22.708 11:28:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.708 11:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@317 -- # [[ -n 59457 ]] 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:22.708 11:28:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.708 11:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:22.708 11:28:26 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:22.708 11:28:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.708 11:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.965 11:28:26 json_config -- json_config/json_config.sh@323 -- # killprocess 59457 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@948 -- # '[' -z 59457 ']' 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@952 -- # kill -0 59457 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@953 -- # uname 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59457 00:04:22.965 
11:28:26 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:22.965 killing process with pid 59457 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59457' 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@967 -- # kill 59457 00:04:22.965 11:28:26 json_config -- common/autotest_common.sh@972 -- # wait 59457 00:04:23.223 11:28:26 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:23.223 11:28:26 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:23.223 11:28:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.223 11:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.223 11:28:26 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:23.223 INFO: Success 00:04:23.223 11:28:26 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:23.223 00:04:23.223 real 0m8.560s 00:04:23.223 user 0m12.438s 00:04:23.223 sys 0m1.709s 00:04:23.223 11:28:26 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.223 ************************************ 00:04:23.223 END TEST json_config 00:04:23.223 ************************************ 00:04:23.223 11:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.223 11:28:26 -- common/autotest_common.sh@1142 -- # return 0 00:04:23.223 11:28:26 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:23.223 11:28:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.223 11:28:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.223 11:28:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.223 ************************************ 00:04:23.223 START TEST json_config_extra_key 00:04:23.223 ************************************ 00:04:23.223 11:28:26 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:23.223 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:23.223 11:28:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:23.223 11:28:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.223 11:28:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.223 11:28:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.223 11:28:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:23.224 11:28:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.224 11:28:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.224 11:28:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.224 11:28:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.224 11:28:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.224 11:28:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.224 11:28:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:23.224 11:28:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.224 11:28:26 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:23.224 11:28:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.224 INFO: launching applications... 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:23.224 11:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59597 00:04:23.224 Waiting for target to run... 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
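Once the waitforlisten below succeeds, the target started from extra_key.json can be exercised directly over its RPC socket; a small sketch of confirming it is serving RPC and reading back what it loaded (extra_key.json's contents are not shown in this log, so no particular output is implied):

RPC_SOCK=/var/tmp/spdk_tgt.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null   # target is answering RPC
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" save_config                  # dump the configuration it actually applied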
00:04:23.224 11:28:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59597 /var/tmp/spdk_tgt.sock 00:04:23.224 11:28:26 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59597 ']' 00:04:23.224 11:28:26 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.224 11:28:26 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.224 11:28:26 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.224 11:28:26 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.224 11:28:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.224 [2024-07-12 11:28:26.661151] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:23.224 [2024-07-12 11:28:26.661278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59597 ] 00:04:23.789 [2024-07-12 11:28:27.074612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.789 [2024-07-12 11:28:27.168240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.789 [2024-07-12 11:28:27.188930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:24.354 11:28:27 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.354 00:04:24.354 INFO: shutting down applications... 00:04:24.354 11:28:27 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:24.354 11:28:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
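The waitforlisten step earlier in this trace polls until the new PID is still alive and its RPC socket is accepting connections (max_retries=100 above). A rough re-creation of that loop, assuming a plain socket-existence check instead of the fuller probe done by autotest_common.sh; wait_for_rpc is an illustrative name, not the real helper:

  # Sketch only: the real helper also issues an RPC to confirm readiness.
  wait_for_rpc() {
      local pid=$1 sock=$2 retries=${3:-100}
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
          [[ -S "$sock" ]] && return 0             # UNIX-domain socket is up
          sleep 0.1
      done
      return 1                                     # timed out
  }

  wait_for_rpc "$app_pid" /var/tmp/spdk_tgt.sock || exit 1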
00:04:24.354 11:28:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59597 ]] 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59597 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59597 00:04:24.354 11:28:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.919 11:28:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.919 11:28:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.919 11:28:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59597 00:04:24.919 11:28:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.919 11:28:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:24.919 11:28:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.919 SPDK target shutdown done 00:04:24.919 11:28:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.919 Success 00:04:24.919 11:28:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:24.919 ************************************ 00:04:24.919 END TEST json_config_extra_key 00:04:24.919 ************************************ 00:04:24.919 00:04:24.919 real 0m1.596s 00:04:24.919 user 0m1.483s 00:04:24.919 sys 0m0.431s 00:04:24.919 11:28:28 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.919 11:28:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.919 11:28:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:24.919 11:28:28 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.919 11:28:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.919 11:28:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.919 11:28:28 -- common/autotest_common.sh@10 -- # set +x 00:04:24.919 ************************************ 00:04:24.919 START TEST alias_rpc 00:04:24.919 ************************************ 00:04:24.919 11:28:28 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.919 * Looking for test storage... 00:04:24.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:24.919 11:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:24.919 11:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59667 00:04:24.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
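The shutdown half of the same json_config_extra_key test, visible at the start of this block, sends SIGINT and then polls kill -0 up to 30 times with a 0.5 s sleep before reporting 'SPDK target shutdown done'. A hedged sketch of that pattern (app_pid as assumed above):

  # Graceful stop: SIGINT, then wait up to ~15 s for the process to exit.
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # gone: shutdown finished
      sleep 0.5
  done
  echo 'SPDK target shutdown done'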
00:04:24.919 11:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59667 00:04:24.919 11:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:24.919 11:28:28 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59667 ']' 00:04:24.919 11:28:28 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.919 11:28:28 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.919 11:28:28 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.919 11:28:28 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.919 11:28:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.919 [2024-07-12 11:28:28.291342] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:24.919 [2024-07-12 11:28:28.291430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59667 ] 00:04:25.176 [2024-07-12 11:28:28.427974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.176 [2024-07-12 11:28:28.557735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.176 [2024-07-12 11:28:28.614969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:26.148 11:28:29 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.148 11:28:29 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:26.148 11:28:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:26.148 11:28:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59667 00:04:26.148 11:28:29 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59667 ']' 00:04:26.148 11:28:29 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59667 00:04:26.148 11:28:29 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:26.148 11:28:29 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.148 11:28:29 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59667 00:04:26.405 killing process with pid 59667 00:04:26.405 11:28:29 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.405 11:28:29 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.405 11:28:29 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59667' 00:04:26.405 11:28:29 alias_rpc -- common/autotest_common.sh@967 -- # kill 59667 00:04:26.405 11:28:29 alias_rpc -- common/autotest_common.sh@972 -- # wait 59667 00:04:26.662 ************************************ 00:04:26.662 END TEST alias_rpc 00:04:26.662 ************************************ 00:04:26.662 00:04:26.662 real 0m1.862s 00:04:26.662 user 0m2.143s 00:04:26.662 sys 0m0.443s 00:04:26.662 11:28:30 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.662 11:28:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.662 11:28:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.662 11:28:30 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:26.662 11:28:30 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:26.662 
11:28:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.662 11:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.662 11:28:30 -- common/autotest_common.sh@10 -- # set +x 00:04:26.662 ************************************ 00:04:26.662 START TEST spdkcli_tcp 00:04:26.662 ************************************ 00:04:26.662 11:28:30 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:26.920 * Looking for test storage... 00:04:26.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:26.920 11:28:30 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.920 11:28:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59742 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:26.920 11:28:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59742 00:04:26.920 11:28:30 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59742 ']' 00:04:26.920 11:28:30 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.920 11:28:30 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.920 11:28:30 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.920 11:28:30 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.920 11:28:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.920 [2024-07-12 11:28:30.180574] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:04:26.921 [2024-07-12 11:28:30.180732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59742 ] 00:04:26.921 [2024-07-12 11:28:30.311290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.178 [2024-07-12 11:28:30.432308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.178 [2024-07-12 11:28:30.432320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.178 [2024-07-12 11:28:30.487078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:27.743 11:28:31 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.743 11:28:31 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:27.743 11:28:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59759 00:04:27.743 11:28:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:27.743 11:28:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:28.001 [ 00:04:28.001 "bdev_malloc_delete", 00:04:28.001 "bdev_malloc_create", 00:04:28.001 "bdev_null_resize", 00:04:28.001 "bdev_null_delete", 00:04:28.001 "bdev_null_create", 00:04:28.001 "bdev_nvme_cuse_unregister", 00:04:28.001 "bdev_nvme_cuse_register", 00:04:28.001 "bdev_opal_new_user", 00:04:28.001 "bdev_opal_set_lock_state", 00:04:28.001 "bdev_opal_delete", 00:04:28.001 "bdev_opal_get_info", 00:04:28.001 "bdev_opal_create", 00:04:28.001 "bdev_nvme_opal_revert", 00:04:28.001 "bdev_nvme_opal_init", 00:04:28.001 "bdev_nvme_send_cmd", 00:04:28.001 "bdev_nvme_get_path_iostat", 00:04:28.001 "bdev_nvme_get_mdns_discovery_info", 00:04:28.001 "bdev_nvme_stop_mdns_discovery", 00:04:28.001 "bdev_nvme_start_mdns_discovery", 00:04:28.001 "bdev_nvme_set_multipath_policy", 00:04:28.001 "bdev_nvme_set_preferred_path", 00:04:28.001 "bdev_nvme_get_io_paths", 00:04:28.001 "bdev_nvme_remove_error_injection", 00:04:28.001 "bdev_nvme_add_error_injection", 00:04:28.001 "bdev_nvme_get_discovery_info", 00:04:28.001 "bdev_nvme_stop_discovery", 00:04:28.001 "bdev_nvme_start_discovery", 00:04:28.001 "bdev_nvme_get_controller_health_info", 00:04:28.001 "bdev_nvme_disable_controller", 00:04:28.001 "bdev_nvme_enable_controller", 00:04:28.001 "bdev_nvme_reset_controller", 00:04:28.001 "bdev_nvme_get_transport_statistics", 00:04:28.001 "bdev_nvme_apply_firmware", 00:04:28.001 "bdev_nvme_detach_controller", 00:04:28.001 "bdev_nvme_get_controllers", 00:04:28.001 "bdev_nvme_attach_controller", 00:04:28.001 "bdev_nvme_set_hotplug", 00:04:28.001 "bdev_nvme_set_options", 00:04:28.001 "bdev_passthru_delete", 00:04:28.001 "bdev_passthru_create", 00:04:28.001 "bdev_lvol_set_parent_bdev", 00:04:28.001 "bdev_lvol_set_parent", 00:04:28.001 "bdev_lvol_check_shallow_copy", 00:04:28.001 "bdev_lvol_start_shallow_copy", 00:04:28.001 "bdev_lvol_grow_lvstore", 00:04:28.001 "bdev_lvol_get_lvols", 00:04:28.001 "bdev_lvol_get_lvstores", 00:04:28.001 "bdev_lvol_delete", 00:04:28.001 "bdev_lvol_set_read_only", 00:04:28.001 "bdev_lvol_resize", 00:04:28.001 "bdev_lvol_decouple_parent", 00:04:28.001 "bdev_lvol_inflate", 00:04:28.001 "bdev_lvol_rename", 00:04:28.001 "bdev_lvol_clone_bdev", 00:04:28.001 "bdev_lvol_clone", 00:04:28.001 "bdev_lvol_snapshot", 00:04:28.001 "bdev_lvol_create", 
00:04:28.001 "bdev_lvol_delete_lvstore", 00:04:28.001 "bdev_lvol_rename_lvstore", 00:04:28.001 "bdev_lvol_create_lvstore", 00:04:28.001 "bdev_raid_set_options", 00:04:28.001 "bdev_raid_remove_base_bdev", 00:04:28.001 "bdev_raid_add_base_bdev", 00:04:28.001 "bdev_raid_delete", 00:04:28.001 "bdev_raid_create", 00:04:28.001 "bdev_raid_get_bdevs", 00:04:28.001 "bdev_error_inject_error", 00:04:28.001 "bdev_error_delete", 00:04:28.001 "bdev_error_create", 00:04:28.001 "bdev_split_delete", 00:04:28.001 "bdev_split_create", 00:04:28.001 "bdev_delay_delete", 00:04:28.001 "bdev_delay_create", 00:04:28.001 "bdev_delay_update_latency", 00:04:28.001 "bdev_zone_block_delete", 00:04:28.001 "bdev_zone_block_create", 00:04:28.001 "blobfs_create", 00:04:28.001 "blobfs_detect", 00:04:28.001 "blobfs_set_cache_size", 00:04:28.001 "bdev_aio_delete", 00:04:28.001 "bdev_aio_rescan", 00:04:28.001 "bdev_aio_create", 00:04:28.001 "bdev_ftl_set_property", 00:04:28.001 "bdev_ftl_get_properties", 00:04:28.001 "bdev_ftl_get_stats", 00:04:28.001 "bdev_ftl_unmap", 00:04:28.001 "bdev_ftl_unload", 00:04:28.001 "bdev_ftl_delete", 00:04:28.001 "bdev_ftl_load", 00:04:28.001 "bdev_ftl_create", 00:04:28.001 "bdev_virtio_attach_controller", 00:04:28.001 "bdev_virtio_scsi_get_devices", 00:04:28.001 "bdev_virtio_detach_controller", 00:04:28.001 "bdev_virtio_blk_set_hotplug", 00:04:28.001 "bdev_iscsi_delete", 00:04:28.001 "bdev_iscsi_create", 00:04:28.001 "bdev_iscsi_set_options", 00:04:28.001 "bdev_uring_delete", 00:04:28.001 "bdev_uring_rescan", 00:04:28.001 "bdev_uring_create", 00:04:28.001 "accel_error_inject_error", 00:04:28.001 "ioat_scan_accel_module", 00:04:28.001 "dsa_scan_accel_module", 00:04:28.001 "iaa_scan_accel_module", 00:04:28.001 "keyring_file_remove_key", 00:04:28.001 "keyring_file_add_key", 00:04:28.001 "keyring_linux_set_options", 00:04:28.001 "iscsi_get_histogram", 00:04:28.001 "iscsi_enable_histogram", 00:04:28.001 "iscsi_set_options", 00:04:28.002 "iscsi_get_auth_groups", 00:04:28.002 "iscsi_auth_group_remove_secret", 00:04:28.002 "iscsi_auth_group_add_secret", 00:04:28.002 "iscsi_delete_auth_group", 00:04:28.002 "iscsi_create_auth_group", 00:04:28.002 "iscsi_set_discovery_auth", 00:04:28.002 "iscsi_get_options", 00:04:28.002 "iscsi_target_node_request_logout", 00:04:28.002 "iscsi_target_node_set_redirect", 00:04:28.002 "iscsi_target_node_set_auth", 00:04:28.002 "iscsi_target_node_add_lun", 00:04:28.002 "iscsi_get_stats", 00:04:28.002 "iscsi_get_connections", 00:04:28.002 "iscsi_portal_group_set_auth", 00:04:28.002 "iscsi_start_portal_group", 00:04:28.002 "iscsi_delete_portal_group", 00:04:28.002 "iscsi_create_portal_group", 00:04:28.002 "iscsi_get_portal_groups", 00:04:28.002 "iscsi_delete_target_node", 00:04:28.002 "iscsi_target_node_remove_pg_ig_maps", 00:04:28.002 "iscsi_target_node_add_pg_ig_maps", 00:04:28.002 "iscsi_create_target_node", 00:04:28.002 "iscsi_get_target_nodes", 00:04:28.002 "iscsi_delete_initiator_group", 00:04:28.002 "iscsi_initiator_group_remove_initiators", 00:04:28.002 "iscsi_initiator_group_add_initiators", 00:04:28.002 "iscsi_create_initiator_group", 00:04:28.002 "iscsi_get_initiator_groups", 00:04:28.002 "nvmf_set_crdt", 00:04:28.002 "nvmf_set_config", 00:04:28.002 "nvmf_set_max_subsystems", 00:04:28.002 "nvmf_stop_mdns_prr", 00:04:28.002 "nvmf_publish_mdns_prr", 00:04:28.002 "nvmf_subsystem_get_listeners", 00:04:28.002 "nvmf_subsystem_get_qpairs", 00:04:28.002 "nvmf_subsystem_get_controllers", 00:04:28.002 "nvmf_get_stats", 00:04:28.002 "nvmf_get_transports", 00:04:28.002 
"nvmf_create_transport", 00:04:28.002 "nvmf_get_targets", 00:04:28.002 "nvmf_delete_target", 00:04:28.002 "nvmf_create_target", 00:04:28.002 "nvmf_subsystem_allow_any_host", 00:04:28.002 "nvmf_subsystem_remove_host", 00:04:28.002 "nvmf_subsystem_add_host", 00:04:28.002 "nvmf_ns_remove_host", 00:04:28.002 "nvmf_ns_add_host", 00:04:28.002 "nvmf_subsystem_remove_ns", 00:04:28.002 "nvmf_subsystem_add_ns", 00:04:28.002 "nvmf_subsystem_listener_set_ana_state", 00:04:28.002 "nvmf_discovery_get_referrals", 00:04:28.002 "nvmf_discovery_remove_referral", 00:04:28.002 "nvmf_discovery_add_referral", 00:04:28.002 "nvmf_subsystem_remove_listener", 00:04:28.002 "nvmf_subsystem_add_listener", 00:04:28.002 "nvmf_delete_subsystem", 00:04:28.002 "nvmf_create_subsystem", 00:04:28.002 "nvmf_get_subsystems", 00:04:28.002 "env_dpdk_get_mem_stats", 00:04:28.002 "nbd_get_disks", 00:04:28.002 "nbd_stop_disk", 00:04:28.002 "nbd_start_disk", 00:04:28.002 "ublk_recover_disk", 00:04:28.002 "ublk_get_disks", 00:04:28.002 "ublk_stop_disk", 00:04:28.002 "ublk_start_disk", 00:04:28.002 "ublk_destroy_target", 00:04:28.002 "ublk_create_target", 00:04:28.002 "virtio_blk_create_transport", 00:04:28.002 "virtio_blk_get_transports", 00:04:28.002 "vhost_controller_set_coalescing", 00:04:28.002 "vhost_get_controllers", 00:04:28.002 "vhost_delete_controller", 00:04:28.002 "vhost_create_blk_controller", 00:04:28.002 "vhost_scsi_controller_remove_target", 00:04:28.002 "vhost_scsi_controller_add_target", 00:04:28.002 "vhost_start_scsi_controller", 00:04:28.002 "vhost_create_scsi_controller", 00:04:28.002 "thread_set_cpumask", 00:04:28.002 "framework_get_governor", 00:04:28.002 "framework_get_scheduler", 00:04:28.002 "framework_set_scheduler", 00:04:28.002 "framework_get_reactors", 00:04:28.002 "thread_get_io_channels", 00:04:28.002 "thread_get_pollers", 00:04:28.002 "thread_get_stats", 00:04:28.002 "framework_monitor_context_switch", 00:04:28.002 "spdk_kill_instance", 00:04:28.002 "log_enable_timestamps", 00:04:28.002 "log_get_flags", 00:04:28.002 "log_clear_flag", 00:04:28.002 "log_set_flag", 00:04:28.002 "log_get_level", 00:04:28.002 "log_set_level", 00:04:28.002 "log_get_print_level", 00:04:28.002 "log_set_print_level", 00:04:28.002 "framework_enable_cpumask_locks", 00:04:28.002 "framework_disable_cpumask_locks", 00:04:28.002 "framework_wait_init", 00:04:28.002 "framework_start_init", 00:04:28.002 "scsi_get_devices", 00:04:28.002 "bdev_get_histogram", 00:04:28.002 "bdev_enable_histogram", 00:04:28.002 "bdev_set_qos_limit", 00:04:28.002 "bdev_set_qd_sampling_period", 00:04:28.002 "bdev_get_bdevs", 00:04:28.002 "bdev_reset_iostat", 00:04:28.002 "bdev_get_iostat", 00:04:28.002 "bdev_examine", 00:04:28.002 "bdev_wait_for_examine", 00:04:28.002 "bdev_set_options", 00:04:28.002 "notify_get_notifications", 00:04:28.002 "notify_get_types", 00:04:28.002 "accel_get_stats", 00:04:28.002 "accel_set_options", 00:04:28.002 "accel_set_driver", 00:04:28.002 "accel_crypto_key_destroy", 00:04:28.002 "accel_crypto_keys_get", 00:04:28.002 "accel_crypto_key_create", 00:04:28.002 "accel_assign_opc", 00:04:28.002 "accel_get_module_info", 00:04:28.002 "accel_get_opc_assignments", 00:04:28.002 "vmd_rescan", 00:04:28.002 "vmd_remove_device", 00:04:28.002 "vmd_enable", 00:04:28.002 "sock_get_default_impl", 00:04:28.002 "sock_set_default_impl", 00:04:28.002 "sock_impl_set_options", 00:04:28.002 "sock_impl_get_options", 00:04:28.002 "iobuf_get_stats", 00:04:28.002 "iobuf_set_options", 00:04:28.002 "framework_get_pci_devices", 00:04:28.002 
"framework_get_config", 00:04:28.002 "framework_get_subsystems", 00:04:28.002 "trace_get_info", 00:04:28.002 "trace_get_tpoint_group_mask", 00:04:28.002 "trace_disable_tpoint_group", 00:04:28.002 "trace_enable_tpoint_group", 00:04:28.002 "trace_clear_tpoint_mask", 00:04:28.002 "trace_set_tpoint_mask", 00:04:28.002 "keyring_get_keys", 00:04:28.002 "spdk_get_version", 00:04:28.002 "rpc_get_methods" 00:04:28.002 ] 00:04:28.002 11:28:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:28.002 11:28:31 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.002 11:28:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.260 11:28:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:28.260 11:28:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59742 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59742 ']' 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59742 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59742 00:04:28.260 killing process with pid 59742 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59742' 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59742 00:04:28.260 11:28:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59742 00:04:28.518 ************************************ 00:04:28.518 END TEST spdkcli_tcp 00:04:28.518 ************************************ 00:04:28.518 00:04:28.518 real 0m1.834s 00:04:28.518 user 0m3.466s 00:04:28.518 sys 0m0.430s 00:04:28.518 11:28:31 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.518 11:28:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:28.518 11:28:31 -- common/autotest_common.sh@1142 -- # return 0 00:04:28.518 11:28:31 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:28.518 11:28:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.518 11:28:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.518 11:28:31 -- common/autotest_common.sh@10 -- # set +x 00:04:28.518 ************************************ 00:04:28.518 START TEST dpdk_mem_utility 00:04:28.518 ************************************ 00:04:28.518 11:28:31 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:28.776 * Looking for test storage... 00:04:28.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:28.776 11:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:28.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:28.776 11:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59829 00:04:28.776 11:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59829 00:04:28.776 11:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.776 11:28:31 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59829 ']' 00:04:28.776 11:28:31 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.776 11:28:31 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.776 11:28:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.776 11:28:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.776 11:28:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.776 [2024-07-12 11:28:32.064173] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:28.776 [2024-07-12 11:28:32.064931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59829 ] 00:04:28.776 [2024-07-12 11:28:32.197228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.034 [2024-07-12 11:28:32.326260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.034 [2024-07-12 11:28:32.386586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:29.601 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.601 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:29.601 11:28:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:29.601 11:28:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:29.601 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.601 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:29.601 { 00:04:29.601 "filename": "/tmp/spdk_mem_dump.txt" 00:04:29.601 } 00:04:29.601 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.601 11:28:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:29.860 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:29.860 1 heaps totaling size 814.000000 MiB 00:04:29.860 size: 814.000000 MiB heap id: 0 00:04:29.860 end heaps---------- 00:04:29.860 8 mempools totaling size 598.116089 MiB 00:04:29.860 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:29.860 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:29.860 size: 84.521057 MiB name: bdev_io_59829 00:04:29.860 size: 51.011292 MiB name: evtpool_59829 00:04:29.860 size: 50.003479 MiB name: msgpool_59829 00:04:29.860 size: 21.763794 MiB name: PDU_Pool 00:04:29.860 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:29.860 size: 0.026123 MiB name: Session_Pool 00:04:29.860 end mempools------- 00:04:29.860 6 memzones totaling size 4.142822 MiB 00:04:29.860 size: 1.000366 MiB name: RG_ring_0_59829 00:04:29.860 size: 1.000366 MiB 
name: RG_ring_1_59829 00:04:29.860 size: 1.000366 MiB name: RG_ring_4_59829 00:04:29.860 size: 1.000366 MiB name: RG_ring_5_59829 00:04:29.860 size: 0.125366 MiB name: RG_ring_2_59829 00:04:29.860 size: 0.015991 MiB name: RG_ring_3_59829 00:04:29.860 end memzones------- 00:04:29.860 11:28:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:29.860 heap id: 0 total size: 814.000000 MiB number of busy elements: 305 number of free elements: 15 00:04:29.860 list of free elements. size: 12.471008 MiB 00:04:29.860 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:29.860 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:29.860 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:29.860 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:29.860 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:29.860 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:29.860 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:29.860 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:29.860 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:29.860 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:04:29.860 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:29.860 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:29.860 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:29.860 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:29.860 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:29.860 list of standard malloc elements. size: 199.266418 MiB 00:04:29.860 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:29.860 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:29.860 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:29.860 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:29.860 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:29.860 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:29.860 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:29.860 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:29.860 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:29.860 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:29.860 
element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:29.860 element at address: 
0x200003a590c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:29.860 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x2000070fdd80 with size: 
0.000183 MiB 00:04:29.861 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:29.861 
element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:29.861 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:29.861 element at address: 
0x200027e65500 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e640 with size: 
0.000183 MiB 00:04:29.861 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:29.861 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:29.861 list of memzone associated elements. 
size: 602.262573 MiB 00:04:29.861 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:29.861 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:29.861 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:29.861 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:29.861 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:29.861 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59829_0 00:04:29.861 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:29.861 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59829_0 00:04:29.861 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:29.861 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59829_0 00:04:29.861 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:29.861 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:29.861 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:29.861 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:29.861 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:29.861 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59829 00:04:29.861 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:29.861 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59829 00:04:29.861 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:29.861 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59829 00:04:29.861 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:29.861 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:29.861 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:29.861 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:29.861 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:29.861 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:29.861 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:29.861 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:29.861 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:29.861 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59829 00:04:29.861 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:29.861 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59829 00:04:29.861 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:29.861 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59829 00:04:29.861 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:29.861 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59829 00:04:29.861 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:29.861 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59829 00:04:29.861 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:29.861 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:29.861 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:29.861 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:29.861 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:29.861 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:29.862 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:29.862 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59829 00:04:29.862 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:29.862 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:29.862 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:29.862 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:29.862 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:29.862 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59829 00:04:29.862 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:29.862 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:29.862 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:29.862 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59829 00:04:29.862 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:29.862 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59829 00:04:29.862 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:29.862 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:29.862 11:28:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:29.862 11:28:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59829 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59829 ']' 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59829 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59829 00:04:29.862 killing process with pid 59829 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59829' 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59829 00:04:29.862 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59829 00:04:30.428 ************************************ 00:04:30.428 END TEST dpdk_mem_utility 00:04:30.428 ************************************ 00:04:30.428 00:04:30.428 real 0m1.663s 00:04:30.428 user 0m1.793s 00:04:30.428 sys 0m0.417s 00:04:30.428 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.428 11:28:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:30.428 11:28:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:30.428 11:28:33 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:30.428 11:28:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.428 11:28:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.428 11:28:33 -- common/autotest_common.sh@10 -- # set +x 00:04:30.428 ************************************ 00:04:30.429 START TEST event 00:04:30.429 ************************************ 00:04:30.429 11:28:33 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:30.429 * Looking for test storage... 
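The dpdk_mem_utility run above has two moving parts: an env_dpdk_get_mem_stats RPC that makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which renders that dump as the heap/mempool/memzone report shown (once plain, once with -m 0 for the per-heap detail). Assuming a target is already listening on the default /var/tmp/spdk.sock, the same steps look like:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # Ask the running target to dump its DPDK memory state;
  # in the trace this returns {"filename": "/tmp/spdk_mem_dump.txt"}.
  "$RPC" env_dpdk_get_mem_stats

  # Summarize the dump: overall view, then heap 0 in detail.
  "$MEM_SCRIPT"
  "$MEM_SCRIPT" -m 0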
00:04:30.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:30.429 11:28:33 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:30.429 11:28:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:30.429 11:28:33 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:30.429 11:28:33 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:30.429 11:28:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.429 11:28:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.429 ************************************ 00:04:30.429 START TEST event_perf 00:04:30.429 ************************************ 00:04:30.429 11:28:33 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:30.429 Running I/O for 1 seconds...[2024-07-12 11:28:33.745316] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:30.429 [2024-07-12 11:28:33.745443] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59900 ] 00:04:30.686 [2024-07-12 11:28:33.885837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:30.686 [2024-07-12 11:28:34.027948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.686 [2024-07-12 11:28:34.028046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:30.686 [2024-07-12 11:28:34.028138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:30.686 [2024-07-12 11:28:34.028143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.059 Running I/O for 1 seconds... 00:04:32.059 lcore 0: 193735 00:04:32.059 lcore 1: 193736 00:04:32.059 lcore 2: 193737 00:04:32.059 lcore 3: 193737 00:04:32.059 done. 00:04:32.059 00:04:32.059 real 0m1.401s 00:04:32.059 user 0m4.195s 00:04:32.059 sys 0m0.072s 00:04:32.059 11:28:35 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.059 11:28:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:32.059 ************************************ 00:04:32.059 END TEST event_perf 00:04:32.059 ************************************ 00:04:32.059 11:28:35 event -- common/autotest_common.sh@1142 -- # return 0 00:04:32.059 11:28:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:32.059 11:28:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:32.059 11:28:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.059 11:28:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.059 ************************************ 00:04:32.059 START TEST event_reactor 00:04:32.059 ************************************ 00:04:32.059 11:28:35 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:32.059 [2024-07-12 11:28:35.189484] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:04:32.059 [2024-07-12 11:28:35.189918] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59939 ] 00:04:32.059 [2024-07-12 11:28:35.323731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.059 [2024-07-12 11:28:35.444159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.431 test_start 00:04:33.431 oneshot 00:04:33.431 tick 100 00:04:33.431 tick 100 00:04:33.431 tick 250 00:04:33.431 tick 100 00:04:33.431 tick 100 00:04:33.431 tick 100 00:04:33.431 tick 250 00:04:33.431 tick 500 00:04:33.431 tick 100 00:04:33.431 tick 100 00:04:33.431 tick 250 00:04:33.431 tick 100 00:04:33.431 tick 100 00:04:33.431 test_end 00:04:33.431 ************************************ 00:04:33.431 END TEST event_reactor 00:04:33.431 ************************************ 00:04:33.431 00:04:33.431 real 0m1.362s 00:04:33.431 user 0m1.198s 00:04:33.431 sys 0m0.053s 00:04:33.431 11:28:36 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.431 11:28:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:33.431 11:28:36 event -- common/autotest_common.sh@1142 -- # return 0 00:04:33.431 11:28:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:33.431 11:28:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:33.431 11:28:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.431 11:28:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.431 ************************************ 00:04:33.431 START TEST event_reactor_perf 00:04:33.431 ************************************ 00:04:33.431 11:28:36 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:33.431 [2024-07-12 11:28:36.588031] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:04:33.431 [2024-07-12 11:28:36.588131] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:04:33.431 [2024-07-12 11:28:36.719176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.431 [2024-07-12 11:28:36.849532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.802 test_start 00:04:34.802 test_end 00:04:34.802 Performance: 375606 events per second 00:04:34.802 00:04:34.802 real 0m1.365s 00:04:34.802 user 0m1.201s 00:04:34.802 sys 0m0.054s 00:04:34.802 11:28:37 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.802 11:28:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:34.802 ************************************ 00:04:34.802 END TEST event_reactor_perf 00:04:34.802 ************************************ 00:04:34.802 11:28:37 event -- common/autotest_common.sh@1142 -- # return 0 00:04:34.802 11:28:37 event -- event/event.sh@49 -- # uname -s 00:04:34.802 11:28:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:34.802 11:28:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:34.802 11:28:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.802 11:28:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.802 11:28:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.802 ************************************ 00:04:34.802 START TEST event_scheduler 00:04:34.802 ************************************ 00:04:34.802 11:28:37 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:34.802 * Looking for test storage... 00:04:34.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:34.802 11:28:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:34.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.802 11:28:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60036 00:04:34.802 11:28:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:34.802 11:28:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.802 11:28:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60036 00:04:34.802 11:28:38 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60036 ']' 00:04:34.802 11:28:38 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.802 11:28:38 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.802 11:28:38 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.802 11:28:38 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.802 11:28:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.802 [2024-07-12 11:28:38.134709] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:04:34.802 [2024-07-12 11:28:38.134809] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60036 ] 00:04:35.060 [2024-07-12 11:28:38.270136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:35.060 [2024-07-12 11:28:38.413665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.060 [2024-07-12 11:28:38.413742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.060 [2024-07-12 11:28:38.413837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:35.060 [2024-07-12 11:28:38.413841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:35.994 11:28:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:35.994 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:35.994 POWER: Cannot set governor of lcore 0 to userspace 00:04:35.994 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:35.994 POWER: Cannot set governor of lcore 0 to performance 00:04:35.994 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:35.994 POWER: Cannot set governor of lcore 0 to userspace 00:04:35.994 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:35.994 POWER: Cannot set governor of lcore 0 to userspace 00:04:35.994 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:35.994 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:35.994 POWER: Unable to set Power Management Environment for lcore 0 00:04:35.994 [2024-07-12 11:28:39.167097] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:35.994 [2024-07-12 11:28:39.167325] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:35.994 [2024-07-12 11:28:39.167568] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:35.994 [2024-07-12 11:28:39.167784] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:35.994 [2024-07-12 11:28:39.167998] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:35.994 [2024-07-12 11:28:39.168015] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.994 11:28:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:35.994 [2024-07-12 11:28:39.227545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:35.994 [2024-07-12 11:28:39.261051] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.994 11:28:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.994 11:28:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:35.994 ************************************ 00:04:35.994 START TEST scheduler_create_thread 00:04:35.994 ************************************ 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.994 2 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.994 3 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.994 4 00:04:35.994 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.995 5 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.995 6 00:04:35.995 
11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.995 7 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.995 8 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.995 9 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.995 10 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.995 11:28:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.995 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.561 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.561 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:36.561 11:28:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:36.562 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.562 11:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 ************************************ 00:04:37.937 END TEST scheduler_create_thread 00:04:37.937 ************************************ 00:04:37.937 11:28:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.937 00:04:37.937 real 0m1.755s 00:04:37.937 user 0m0.016s 00:04:37.937 sys 0m0.010s 00:04:37.937 11:28:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.937 11:28:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:37.937 11:28:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:37.937 11:28:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60036 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60036 ']' 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60036 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60036 00:04:37.937 killing process with pid 60036 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60036' 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60036 00:04:37.937 11:28:41 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60036 00:04:38.195 [2024-07-12 11:28:41.504331] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
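For reference, a minimal sketch (not captured output) of the scheduler-plugin RPC sequence that the scheduler_create_thread test above drives. The plugin name, RPC names, masks, and active percentages mirror the trace; the explicit rpc.py invocation, the -s /var/tmp/spdk.sock socket (taken from the waitforlisten message), and the thread ids 11/12 are illustrative assumptions, since the test calls these through the rpc_cmd wrapper.
rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # one pinned thread per core, 0x1..0x8
rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle counterparts, 0x1..0x8
rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # assumed to return thread_id 11
rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # raise its load to 50%
rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100                # assumed to return thread_id 12
rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_delete 12                               # delete it again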
00:04:38.453 00:04:38.453 real 0m3.738s 00:04:38.453 user 0m6.780s 00:04:38.453 sys 0m0.361s 00:04:38.453 ************************************ 00:04:38.453 11:28:41 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.453 11:28:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.453 END TEST event_scheduler 00:04:38.453 ************************************ 00:04:38.453 11:28:41 event -- common/autotest_common.sh@1142 -- # return 0 00:04:38.453 11:28:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:38.453 11:28:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:38.453 11:28:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.453 11:28:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.453 11:28:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.453 ************************************ 00:04:38.453 START TEST app_repeat 00:04:38.453 ************************************ 00:04:38.453 11:28:41 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:38.453 11:28:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.453 11:28:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.453 11:28:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:38.453 11:28:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.453 11:28:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:38.453 11:28:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:38.454 11:28:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:38.454 11:28:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60124 00:04:38.454 11:28:41 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:38.454 11:28:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.454 Process app_repeat pid: 60124 00:04:38.454 11:28:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60124' 00:04:38.454 spdk_app_start Round 0 00:04:38.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:38.454 11:28:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.454 11:28:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:38.454 11:28:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60124 /var/tmp/spdk-nbd.sock 00:04:38.454 11:28:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60124 ']' 00:04:38.454 11:28:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.454 11:28:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.454 11:28:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.454 11:28:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.454 11:28:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.454 [2024-07-12 11:28:41.817095] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:04:38.454 [2024-07-12 11:28:41.817221] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60124 ] 00:04:38.712 [2024-07-12 11:28:41.963149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.712 [2024-07-12 11:28:42.124673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.712 [2024-07-12 11:28:42.124691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.969 [2024-07-12 11:28:42.203740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.535 11:28:42 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.535 11:28:42 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:39.535 11:28:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.792 Malloc0 00:04:39.792 11:28:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.048 Malloc1 00:04:40.048 11:28:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.048 11:28:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.048 11:28:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.048 11:28:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.048 11:28:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.048 11:28:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.048 11:28:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.048 11:28:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.048 11:28:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.049 11:28:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.049 11:28:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.049 11:28:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.049 11:28:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.049 11:28:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.049 11:28:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.049 11:28:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.306 /dev/nbd0 00:04:40.306 11:28:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.306 11:28:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:40.306 11:28:43 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.306 1+0 records in 00:04:40.306 1+0 records out 00:04:40.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047723 s, 8.6 MB/s 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:40.306 11:28:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:40.306 11:28:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.306 11:28:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.306 11:28:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.565 /dev/nbd1 00:04:40.565 11:28:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.565 11:28:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.565 1+0 records in 00:04:40.565 1+0 records out 00:04:40.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319294 s, 12.8 MB/s 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:40.565 11:28:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:40.565 11:28:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.565 11:28:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.565 11:28:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:40.565 11:28:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.565 11:28:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.824 { 00:04:40.824 "nbd_device": "/dev/nbd0", 00:04:40.824 "bdev_name": "Malloc0" 00:04:40.824 }, 00:04:40.824 { 00:04:40.824 "nbd_device": "/dev/nbd1", 00:04:40.824 "bdev_name": "Malloc1" 00:04:40.824 } 00:04:40.824 ]' 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.824 { 00:04:40.824 "nbd_device": "/dev/nbd0", 00:04:40.824 "bdev_name": "Malloc0" 00:04:40.824 }, 00:04:40.824 { 00:04:40.824 "nbd_device": "/dev/nbd1", 00:04:40.824 "bdev_name": "Malloc1" 00:04:40.824 } 00:04:40.824 ]' 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.824 /dev/nbd1' 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.824 /dev/nbd1' 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.824 256+0 records in 00:04:40.824 256+0 records out 00:04:40.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0064465 s, 163 MB/s 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.824 256+0 records in 00:04:40.824 256+0 records out 00:04:40.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208755 s, 50.2 MB/s 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.824 11:28:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.082 256+0 records in 00:04:41.082 256+0 records out 00:04:41.082 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296708 s, 35.3 MB/s 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.082 11:28:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.341 11:28:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.599 11:28:44 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.599 11:28:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.857 11:28:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.857 11:28:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.115 11:28:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.371 [2024-07-12 11:28:45.661095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.371 [2024-07-12 11:28:45.780237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.371 [2024-07-12 11:28:45.780249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.628 [2024-07-12 11:28:45.834335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.628 [2024-07-12 11:28:45.834428] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.628 [2024-07-12 11:28:45.834443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.156 spdk_app_start Round 1 00:04:45.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.156 11:28:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.156 11:28:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.157 11:28:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60124 /var/tmp/spdk-nbd.sock 00:04:45.157 11:28:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60124 ']' 00:04:45.157 11:28:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.157 11:28:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.157 11:28:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
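For reference, a minimal sketch (not captured output) of one malloc-bdev/NBD round that app_repeat repeats above for each spdk_app_start round. Every command mirrors the trace; the long repo paths are shortened to a local nbdrandtest file, and the RPC socket is the /var/tmp/spdk-nbd.sock passed via -r in the app_repeat command line.
rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096            # creates Malloc0
rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096            # creates Malloc1
rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                    # 1 MiB random test pattern
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct          # write pattern to each NBD device
dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                                     # read back and verify
cmp -b -n 1M nbdrandtest /dev/nbd1
rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                         # expected to report no remaining NBD devices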
00:04:45.157 11:28:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.157 11:28:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.415 11:28:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.415 11:28:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:45.415 11:28:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.673 Malloc0 00:04:45.673 11:28:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.932 Malloc1 00:04:46.189 11:28:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.189 /dev/nbd0 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.189 1+0 records in 00:04:46.189 1+0 records out 
00:04:46.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033514 s, 12.2 MB/s 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:46.189 11:28:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.189 11:28:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.447 /dev/nbd1 00:04:46.447 11:28:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.447 11:28:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.447 1+0 records in 00:04:46.447 1+0 records out 00:04:46.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036544 s, 11.2 MB/s 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:46.447 11:28:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:46.447 11:28:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.447 11:28:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.706 11:28:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.706 11:28:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.706 11:28:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.706 11:28:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.706 { 00:04:46.706 "nbd_device": "/dev/nbd0", 00:04:46.706 "bdev_name": "Malloc0" 00:04:46.706 }, 00:04:46.706 { 00:04:46.706 "nbd_device": "/dev/nbd1", 00:04:46.706 "bdev_name": "Malloc1" 00:04:46.706 } 
00:04:46.706 ]' 00:04:46.706 11:28:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.706 { 00:04:46.707 "nbd_device": "/dev/nbd0", 00:04:46.707 "bdev_name": "Malloc0" 00:04:46.707 }, 00:04:46.707 { 00:04:46.707 "nbd_device": "/dev/nbd1", 00:04:46.707 "bdev_name": "Malloc1" 00:04:46.707 } 00:04:46.707 ]' 00:04:46.707 11:28:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.964 /dev/nbd1' 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.964 /dev/nbd1' 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.964 11:28:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.965 256+0 records in 00:04:46.965 256+0 records out 00:04:46.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00736843 s, 142 MB/s 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.965 256+0 records in 00:04:46.965 256+0 records out 00:04:46.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312146 s, 33.6 MB/s 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.965 256+0 records in 00:04:46.965 256+0 records out 00:04:46.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314126 s, 33.4 MB/s 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.965 11:28:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.965 11:28:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.223 11:28:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.480 11:28:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.481 11:28:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.739 11:28:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.739 11:28:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.739 11:28:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.996 11:28:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.996 11:28:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.253 11:28:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.511 [2024-07-12 11:28:51.733012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.511 [2024-07-12 11:28:51.850070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.511 [2024-07-12 11:28:51.850081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.511 [2024-07-12 11:28:51.905143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:48.511 [2024-07-12 11:28:51.905226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.511 [2024-07-12 11:28:51.905241] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:51.822 spdk_app_start Round 2 00:04:51.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.822 11:28:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.822 11:28:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:51.822 11:28:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60124 /var/tmp/spdk-nbd.sock 00:04:51.822 11:28:54 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60124 ']' 00:04:51.822 11:28:54 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.822 11:28:54 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.822 11:28:54 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
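The trace above is the tail of nbd_rpc_data_verify for this round: a 1 MiB random file is written to both NBD devices, read back with cmp, the devices are detached over the RPC socket, and nbd_get_disks has to come back empty. A condensed sketch of that write/verify/teardown flow, reconstructed only from the traced commands (the dd geometry, cmp limit, device names and spdk-nbd.sock path come straight from the log; the wrapper function and the /tmp scratch path are illustrative):

  # Write a 1 MiB random pattern to every NBD device, then read it back byte-for-byte.
  nbd_dd_data_verify_sketch() {
    local tmp_file=/tmp/nbdrandtest          # the traced run uses test/event/nbdrandtest
    local dev
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "$@"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "$@"; do
      cmp -b -n 1M "$tmp_file" "$dev"        # any mismatching byte fails the test
    done
    rm "$tmp_file"
  }

  # Teardown: detach both devices over the RPC socket and expect zero exported disks.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  nbd_dd_data_verify_sketch /dev/nbd0 /dev/nbd1
  for dev in /dev/nbd0 /dev/nbd1; do rpc nbd_stop_disk "$dev"; done
  count=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || echo "devices still exported: $count"

The same sequence repeats once per app_repeat round, which is why the identical dd/cmp/jq lines reappear below for Round 2.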
00:04:51.822 11:28:54 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.822 11:28:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.822 11:28:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.822 11:28:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:51.822 11:28:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.822 Malloc0 00:04:51.822 11:28:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.080 Malloc1 00:04:52.080 11:28:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.080 11:28:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.338 /dev/nbd0 00:04:52.338 11:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.338 11:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.338 1+0 records in 00:04:52.338 1+0 records out 
00:04:52.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269976 s, 15.2 MB/s 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.338 11:28:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:52.338 11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.338 11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.338 11:28:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.596 /dev/nbd1 00:04:52.596 11:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.596 11:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.596 1+0 records in 00:04:52.596 1+0 records out 00:04:52.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292305 s, 14.0 MB/s 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.596 11:28:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:52.596 11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.596 11:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.596 11:28:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.596 11:28:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.596 11:28:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.854 { 00:04:52.854 "nbd_device": "/dev/nbd0", 00:04:52.854 "bdev_name": "Malloc0" 00:04:52.854 }, 00:04:52.854 { 00:04:52.854 "nbd_device": "/dev/nbd1", 00:04:52.854 "bdev_name": "Malloc1" 00:04:52.854 } 
00:04:52.854 ]' 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.854 { 00:04:52.854 "nbd_device": "/dev/nbd0", 00:04:52.854 "bdev_name": "Malloc0" 00:04:52.854 }, 00:04:52.854 { 00:04:52.854 "nbd_device": "/dev/nbd1", 00:04:52.854 "bdev_name": "Malloc1" 00:04:52.854 } 00:04:52.854 ]' 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.854 /dev/nbd1' 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.854 /dev/nbd1' 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.854 11:28:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.855 11:28:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.855 11:28:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.855 11:28:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.855 256+0 records in 00:04:52.855 256+0 records out 00:04:52.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00750275 s, 140 MB/s 00:04:52.855 11:28:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.855 11:28:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.113 256+0 records in 00:04:53.113 256+0 records out 00:04:53.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271435 s, 38.6 MB/s 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.113 256+0 records in 00:04:53.113 256+0 records out 00:04:53.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282479 s, 37.1 MB/s 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.113 11:28:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.113 11:28:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.371 11:28:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.629 11:28:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.886 11:28:57 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.886 11:28:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.886 11:28:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.143 11:28:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.401 [2024-07-12 11:28:57.687741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.401 [2024-07-12 11:28:57.802203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.401 [2024-07-12 11:28:57.802214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.658 [2024-07-12 11:28:57.855525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.658 [2024-07-12 11:28:57.855617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.658 [2024-07-12 11:28:57.855633] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.243 11:29:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60124 /var/tmp/spdk-nbd.sock 00:04:57.243 11:29:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60124 ']' 00:04:57.243 11:29:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.243 11:29:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.243 11:29:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
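The waitfornbd checks traced a little earlier (right after each nbd_start_disk call) gate the data phase: a device has to show up in /proc/partitions and answer a single O_DIRECT read before any pattern is written to it. A rough sketch of that readiness probe, pieced together from the traced autotest_common.sh lines (the 20-iteration bound, the grep -w on /proc/partitions and the one-block dd are from the log; the sleep interval is a guess, since the loop body between retries is not xtraced):

  # Wait until an NBD device is registered and actually services reads.
  waitfornbd_sketch() {
    local nbd_name=$1 i size
    for (( i = 1; i <= 20; i++ )); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                              # assumed pacing; the traced run hits it on try 1
    done
    (( i <= 20 )) || return 1
    # A single O_DIRECT read proves the block device answers I/O, not just that it exists.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
  }

Called as "waitfornbd_sketch nbd0"; waitfornbd_exit in the teardown path is the mirror image, looping until the same grep stops matching.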
00:04:57.243 11:29:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.243 11:29:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:57.501 11:29:00 event.app_repeat -- event/event.sh@39 -- # killprocess 60124 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60124 ']' 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60124 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60124 00:04:57.501 killing process with pid 60124 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60124' 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60124 00:04:57.501 11:29:00 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60124 00:04:57.760 spdk_app_start is called in Round 0. 00:04:57.760 Shutdown signal received, stop current app iteration 00:04:57.760 Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 reinitialization... 00:04:57.760 spdk_app_start is called in Round 1. 00:04:57.760 Shutdown signal received, stop current app iteration 00:04:57.760 Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 reinitialization... 00:04:57.761 spdk_app_start is called in Round 2. 00:04:57.761 Shutdown signal received, stop current app iteration 00:04:57.761 Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 reinitialization... 00:04:57.761 spdk_app_start is called in Round 3. 
00:04:57.761 Shutdown signal received, stop current app iteration 00:04:57.761 11:29:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:57.761 11:29:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:57.761 00:04:57.761 real 0m19.227s 00:04:57.761 user 0m43.045s 00:04:57.761 sys 0m2.931s 00:04:57.761 11:29:01 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.761 ************************************ 00:04:57.761 END TEST app_repeat 00:04:57.761 ************************************ 00:04:57.761 11:29:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.761 11:29:01 event -- common/autotest_common.sh@1142 -- # return 0 00:04:57.761 11:29:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:57.761 11:29:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.761 11:29:01 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.761 11:29:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.761 11:29:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.761 ************************************ 00:04:57.761 START TEST cpu_locks 00:04:57.761 ************************************ 00:04:57.761 11:29:01 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.761 * Looking for test storage... 00:04:57.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.761 11:29:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:57.761 11:29:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:57.761 11:29:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:57.761 11:29:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:57.761 11:29:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.761 11:29:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.761 11:29:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.761 ************************************ 00:04:57.761 START TEST default_locks 00:04:57.761 ************************************ 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60557 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60557 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60557 ']' 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
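killprocess, traced at the end of app_repeat just above and reused by every cpu_locks case below, is the common teardown: refuse an empty pid, confirm the process still exists and is not a privileged wrapper, then kill it and reap it. A sketch of that logic as it appears in the traced autotest_common.sh lines (the reactor_0 comm name is what this run reports; the sudo branch is simplified here):

  killprocess_sketch() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                # the traced '[' -z ... ']' guard
    kill -0 "$pid" || return 1               # process must still be alive
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # The traced run sees process_name=reactor_0 and takes the plain-kill branch;
    # the special handling for a sudo wrapper is omitted from this sketch.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # reaps it; only valid for children of this shell
  }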
00:04:57.761 11:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.761 11:29:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.020 [2024-07-12 11:29:01.211942] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:58.020 [2024-07-12 11:29:01.212045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60557 ] 00:04:58.020 [2024-07-12 11:29:01.350441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.278 [2024-07-12 11:29:01.484623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.278 [2024-07-12 11:29:01.543097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.846 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.846 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:58.846 11:29:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60557 00:04:58.846 11:29:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60557 00:04:58.846 11:29:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60557 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60557 ']' 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60557 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60557 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.413 killing process with pid 60557 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60557' 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60557 00:04:59.413 11:29:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60557 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60557 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60557 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.672 11:29:03 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60557 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60557 ']' 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.672 ERROR: process (pid: 60557) is no longer running 00:04:59.672 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60557) - No such process 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.672 00:04:59.672 real 0m1.967s 00:04:59.672 user 0m2.063s 00:04:59.672 sys 0m0.640s 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.672 ************************************ 00:04:59.672 END TEST default_locks 00:04:59.672 ************************************ 00:04:59.672 11:29:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.931 11:29:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:59.931 11:29:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:59.931 11:29:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.931 11:29:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.931 11:29:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.931 ************************************ 00:04:59.931 START TEST default_locks_via_rpc 00:04:59.931 ************************************ 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60609 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60609 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60609 ']' 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.931 11:29:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.931 [2024-07-12 11:29:03.228826] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:04:59.931 [2024-07-12 11:29:03.228946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60609 ] 00:04:59.931 [2024-07-12 11:29:03.361545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.195 [2024-07-12 11:29:03.485105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.195 [2024-07-12 11:29:03.541652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60609 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60609 00:05:01.149 11:29:04 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60609 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60609 ']' 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60609 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60609 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.408 killing process with pid 60609 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60609' 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60609 00:05:01.408 11:29:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60609 00:05:01.668 00:05:01.668 real 0m1.882s 00:05:01.668 user 0m2.049s 00:05:01.668 sys 0m0.542s 00:05:01.668 11:29:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.668 11:29:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.668 ************************************ 00:05:01.668 END TEST default_locks_via_rpc 00:05:01.668 ************************************ 00:05:01.668 11:29:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:01.668 11:29:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:01.668 11:29:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.668 11:29:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.668 11:29:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.668 ************************************ 00:05:01.668 START TEST non_locking_app_on_locked_coremask 00:05:01.668 ************************************ 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60660 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60660 /var/tmp/spdk.sock 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60660 ']' 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
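Both default_locks cases above hinge on the same two-line predicate at cpu_locks.sh line 22: lslocks for the target pid piped into a grep for the spdk_cpu_lock file. default_locks_via_rpc then toggles the behaviour at runtime instead of at launch. A short sketch of that check and the RPC toggle, using only commands visible in the trace (the real disable-side check goes through a separate no_locks helper that inspects lock files directly; locks_exist is reused here to keep the sketch small, and the pid is just the one from this run):

  # True when the target pid holds an SPDK CPU-core lock file (cpu_locks.sh line 22).
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # default socket /var/tmp/spdk.sock

  pid=60609                                  # pid from this particular run; yours will differ
  rpc framework_disable_cpumask_locks        # lock should be released at runtime
  if locks_exist "$pid"; then echo "unexpected core lock"; exit 1; fi
  rpc framework_enable_cpumask_locks         # and re-taken on request
  if ! locks_exist "$pid"; then echo "core lock missing"; exit 1; fi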
00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.668 11:29:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.927 [2024-07-12 11:29:05.178797] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:01.927 [2024-07-12 11:29:05.178919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60660 ] 00:05:01.927 [2024-07-12 11:29:05.314738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.185 [2024-07-12 11:29:05.441274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.185 [2024-07-12 11:29:05.498620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.751 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.751 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:03.010 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:03.010 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60676 00:05:03.010 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60676 /var/tmp/spdk2.sock 00:05:03.010 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60676 ']' 00:05:03.010 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.010 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.010 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.010 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.011 11:29:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.011 [2024-07-12 11:29:06.259200] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:03.011 [2024-07-12 11:29:06.259306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60676 ] 00:05:03.011 [2024-07-12 11:29:06.404719] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:03.011 [2024-07-12 11:29:06.404799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.269 [2024-07-12 11:29:06.648167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.527 [2024-07-12 11:29:06.761485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.786 11:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.786 11:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:03.786 11:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60660 00:05:03.786 11:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.786 11:29:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60660 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60660 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60660 ']' 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60660 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60660 00:05:04.721 killing process with pid 60660 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60660' 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60660 00:05:04.721 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60660 00:05:05.656 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60676 00:05:05.656 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60676 ']' 00:05:05.656 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60676 00:05:05.657 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:05.657 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.657 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60676 00:05:05.657 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.657 killing process with pid 60676 00:05:05.657 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.657 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60676' 00:05:05.657 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60676 00:05:05.657 11:29:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60676 00:05:05.914 ************************************ 00:05:05.914 END TEST non_locking_app_on_locked_coremask 00:05:05.914 ************************************ 00:05:05.914 00:05:05.914 real 0m4.217s 00:05:05.914 user 0m4.693s 00:05:05.914 sys 0m1.134s 00:05:05.914 11:29:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.914 11:29:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.174 11:29:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:06.174 11:29:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:06.174 11:29:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.174 11:29:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.174 11:29:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.174 ************************************ 00:05:06.174 START TEST locking_app_on_unlocked_coremask 00:05:06.174 ************************************ 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60743 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60743 /var/tmp/spdk.sock 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60743 ']' 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:06.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
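This case and locking_app_on_unlocked_coremask right after it boil down to launching two spdk_tgt instances on the same core mask and checking who is allowed to hold the core lock. A minimal sketch of the launch pattern taken from the traced cpu_locks.sh lines 79-84 (binary path, mask, --disable-cpumask-locks flag and the spdk2.sock name are straight from the log; the sleep stands in for the traced waitforlisten polling, and the check on the second pid is an assumption noted below):

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # First instance claims core 0 and creates the spdk_cpu_lock file.
  "$SPDK_BIN" -m 0x1 &
  pid1=$!

  # Second instance shares core 0 but opts out of lock enforcement, so it must start
  # cleanly instead of dying with "Cannot create lock on core 0 ... has claimed it".
  "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!

  sleep 2                                              # stand-in for waitforlisten on each socket
  lslocks -p "$pid1" | grep -q spdk_cpu_lock           # the launcher with locks enabled holds it
  lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "unexpected lock on second instance"
  # (the traced test only asserts on the first pid; the pid2 check is an assumption here)
  kill "$pid1" "$pid2"

locking_app_on_locked_coremask below flips the expectation: the second instance keeps lock enforcement on, so spdk_app_start must fail with the "Cannot create lock on core 0" error seen near the end of this section.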
00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.174 11:29:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.174 [2024-07-12 11:29:09.448028] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:06.174 [2024-07-12 11:29:09.448131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:05:06.174 [2024-07-12 11:29:09.588244] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:06.174 [2024-07-12 11:29:09.588324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.434 [2024-07-12 11:29:09.708508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.434 [2024-07-12 11:29:09.765808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60765 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60765 /var/tmp/spdk2.sock 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60765 ']' 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.376 11:29:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.376 [2024-07-12 11:29:10.508792] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:07.376 [2024-07-12 11:29:10.509196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60765 ] 00:05:07.376 [2024-07-12 11:29:10.647509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.635 [2024-07-12 11:29:10.908445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.635 [2024-07-12 11:29:11.035734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:08.203 11:29:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.203 11:29:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:08.203 11:29:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60765 00:05:08.203 11:29:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60765 00:05:08.203 11:29:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60743 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60743 ']' 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60743 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60743 00:05:09.140 killing process with pid 60743 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60743' 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60743 00:05:09.140 11:29:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60743 00:05:09.708 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60765 00:05:09.708 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60765 ']' 00:05:09.708 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60765 00:05:09.708 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.708 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.708 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60765 00:05:09.967 killing process with pid 60765 00:05:09.967 11:29:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.967 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.967 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60765' 00:05:09.967 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60765 00:05:09.967 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60765 00:05:10.227 ************************************ 00:05:10.227 END TEST locking_app_on_unlocked_coremask 00:05:10.227 ************************************ 00:05:10.227 00:05:10.227 real 0m4.194s 00:05:10.227 user 0m4.640s 00:05:10.227 sys 0m1.141s 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.227 11:29:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:10.227 11:29:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:10.227 11:29:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.227 11:29:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.227 11:29:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.227 ************************************ 00:05:10.227 START TEST locking_app_on_locked_coremask 00:05:10.227 ************************************ 00:05:10.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60832 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60832 /var/tmp/spdk.sock 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60832 ']' 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.227 11:29:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.486 [2024-07-12 11:29:13.697335] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:10.486 [2024-07-12 11:29:13.697775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60832 ] 00:05:10.486 [2024-07-12 11:29:13.837331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.745 [2024-07-12 11:29:13.958009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.745 [2024-07-12 11:29:14.014444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60848 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60848 /var/tmp/spdk2.sock 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60848 /var/tmp/spdk2.sock 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60848 /var/tmp/spdk2.sock 00:05:11.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60848 ']' 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.311 11:29:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 [2024-07-12 11:29:14.730661] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:11.312 [2024-07-12 11:29:14.730742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60848 ] 00:05:11.570 [2024-07-12 11:29:14.873236] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60832 has claimed it. 00:05:11.570 [2024-07-12 11:29:14.873311] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:12.137 ERROR: process (pid: 60848) is no longer running 00:05:12.137 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60848) - No such process 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60832 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60832 00:05:12.137 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.704 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60832 00:05:12.704 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60832 ']' 00:05:12.704 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60832 00:05:12.704 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.704 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.704 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60832 00:05:12.704 killing process with pid 60832 00:05:12.704 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.704 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.705 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60832' 00:05:12.705 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60832 00:05:12.705 11:29:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60832 00:05:12.962 ************************************ 00:05:12.962 END TEST locking_app_on_locked_coremask 00:05:12.962 ************************************ 00:05:12.962 00:05:12.962 real 0m2.713s 00:05:12.962 user 0m3.123s 00:05:12.962 sys 0m0.653s 00:05:12.962 11:29:16 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.962 11:29:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.962 11:29:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:12.962 11:29:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:12.962 11:29:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.962 11:29:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.962 11:29:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.962 ************************************ 00:05:12.962 START TEST locking_overlapped_coremask 00:05:12.962 ************************************ 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:12.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60893 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60893 /var/tmp/spdk.sock 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60893 ']' 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.962 11:29:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.219 [2024-07-12 11:29:16.469350] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:13.219 [2024-07-12 11:29:16.469726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60893 ] 00:05:13.219 [2024-07-12 11:29:16.616014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.477 [2024-07-12 11:29:16.739168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.477 [2024-07-12 11:29:16.739244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.477 [2024-07-12 11:29:16.739251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.477 [2024-07-12 11:29:16.796777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60911 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60911 /var/tmp/spdk2.sock 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60911 /var/tmp/spdk2.sock 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60911 /var/tmp/spdk2.sock 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60911 ']' 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.412 11:29:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 [2024-07-12 11:29:17.570886] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
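The first target above was started with -m 0x7 and the second with -m 0x1c, so the core-2 conflict reported just below follows directly from the mask arithmetic. A quick sketch of why core 2 is the contested one:

    # -m 0x7  = 0b00111 -> cores 0,1,2   (first target, pid 60893)
    # -m 0x1c = 0b11100 -> cores 2,3,4   (second target)
    # The masks intersect only at bit 2, so core 2 is the lock the second instance fails to claim.
    printf '%d\n' $(( 0x7 & 0x1c ))   # prints 4 == 1<<2: only bit 2 is set in both masks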
00:05:14.412 [2024-07-12 11:29:17.571228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60911 ] 00:05:14.412 [2024-07-12 11:29:17.718862] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60893 has claimed it. 00:05:14.412 [2024-07-12 11:29:17.718935] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:14.979 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60911) - No such process 00:05:14.979 ERROR: process (pid: 60911) is no longer running 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60893 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60893 ']' 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60893 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60893 00:05:14.979 killing process with pid 60893 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60893' 00:05:14.979 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60893 00:05:14.979 11:29:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60893 00:05:15.546 ************************************ 00:05:15.546 END TEST locking_overlapped_coremask 00:05:15.546 ************************************ 00:05:15.546 00:05:15.546 real 0m2.323s 00:05:15.546 user 0m6.403s 00:05:15.546 sys 0m0.479s 00:05:15.546 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.546 11:29:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.546 11:29:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:15.546 11:29:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:15.546 11:29:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.546 11:29:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.546 11:29:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.546 ************************************ 00:05:15.547 START TEST locking_overlapped_coremask_via_rpc 00:05:15.547 ************************************ 00:05:15.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60957 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60957 /var/tmp/spdk.sock 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60957 ']' 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.547 11:29:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.547 [2024-07-12 11:29:18.844072] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:15.547 [2024-07-12 11:29:18.844183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:05:15.547 [2024-07-12 11:29:18.986802] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.547 [2024-07-12 11:29:18.986858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.804 [2024-07-12 11:29:19.118715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.804 [2024-07-12 11:29:19.118805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.804 [2024-07-12 11:29:19.118812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.804 [2024-07-12 11:29:19.185182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60975 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60975 /var/tmp/spdk2.sock 00:05:16.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60975 ']' 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.368 11:29:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.625 [2024-07-12 11:29:19.838600] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:16.625 [2024-07-12 11:29:19.838936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60975 ] 00:05:16.625 [2024-07-12 11:29:19.979759] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
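Unlike the previous scenario, both targets above come up despite overlapping masks because --disable-cpumask-locks defers lock acquisition to a later RPC. A sketch of that setup, assuming the commands are run from the root of an SPDK checkout as in this trace:

    # Two targets with overlapping core masks; neither takes per-core locks at startup.
    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # Locks are only claimed once each instance is sent framework_enable_cpumask_locks;
    # whichever instance asks second loses the contested core (core 2 here).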
00:05:16.625 [2024-07-12 11:29:19.979834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.926 [2024-07-12 11:29:20.223334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.926 [2024-07-12 11:29:20.227688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:16.926 [2024-07-12 11:29:20.227691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.926 [2024-07-12 11:29:20.337418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.493 [2024-07-12 11:29:20.800726] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60957 has claimed it. 00:05:17.493 request: 00:05:17.493 { 00:05:17.493 "method": "framework_enable_cpumask_locks", 00:05:17.493 "req_id": 1 00:05:17.493 } 00:05:17.493 Got JSON-RPC error response 00:05:17.493 response: 00:05:17.493 { 00:05:17.493 "code": -32603, 00:05:17.493 "message": "Failed to claim CPU core: 2" 00:05:17.493 } 00:05:17.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
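The JSON-RPC exchange above can be replayed by hand. The rpc_cmd helper in these traces wraps SPDK's scripts/rpc.py, so the equivalent manual call, assuming the script exposes the method under the same name shown in the JSON request, would be:

    # Ask the second target (listening on /var/tmp/spdk2.sock) to claim its core locks.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # With pid 60957 already holding the lock for core 2, this is expected to fail
    # with JSON-RPC error -32603: "Failed to claim CPU core: 2".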
00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60957 /var/tmp/spdk.sock 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60957 ']' 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.493 11:29:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60975 /var/tmp/spdk2.sock 00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60975 ']' 00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
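The check_remaining_locks step replayed just below compares whatever lock files exist under /var/tmp against the set expected for cores 0-2. Stripped of the test harness, it is a glob-versus-brace-expansion comparison:

    # All and only cores 0-2 should hold lock files at this point (sketch from the trace).
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match the expected set"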
00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.751 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.009 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.009 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:18.009 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:18.009 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.009 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.009 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.009 00:05:18.009 real 0m2.561s 00:05:18.009 user 0m1.284s 00:05:18.009 sys 0m0.203s 00:05:18.009 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.009 11:29:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.009 ************************************ 00:05:18.009 END TEST locking_overlapped_coremask_via_rpc 00:05:18.009 ************************************ 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:18.009 11:29:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:18.009 11:29:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60957 ]] 00:05:18.009 11:29:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60957 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60957 ']' 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60957 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60957 00:05:18.009 killing process with pid 60957 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60957' 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60957 00:05:18.009 11:29:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60957 00:05:18.575 11:29:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60975 ]] 00:05:18.575 11:29:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60975 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60975 ']' 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60975 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:18.575 11:29:21 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60975 00:05:18.575 killing process with pid 60975 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60975' 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60975 00:05:18.575 11:29:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60975 00:05:18.833 11:29:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.833 11:29:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:18.833 11:29:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60957 ]] 00:05:18.833 11:29:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60957 00:05:18.833 11:29:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60957 ']' 00:05:18.833 11:29:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60957 00:05:18.833 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60957) - No such process 00:05:18.833 Process with pid 60957 is not found 00:05:18.833 Process with pid 60975 is not found 00:05:18.833 11:29:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60957 is not found' 00:05:18.833 11:29:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60975 ]] 00:05:18.833 11:29:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60975 00:05:18.833 11:29:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60975 ']' 00:05:18.833 11:29:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60975 00:05:18.833 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60975) - No such process 00:05:18.833 11:29:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60975 is not found' 00:05:18.833 11:29:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.833 00:05:18.833 real 0m21.181s 00:05:18.833 user 0m36.179s 00:05:18.833 sys 0m5.678s 00:05:18.833 11:29:22 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.833 11:29:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.833 ************************************ 00:05:18.833 END TEST cpu_locks 00:05:18.833 ************************************ 00:05:18.833 11:29:22 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.091 00:05:19.092 real 0m48.641s 00:05:19.092 user 1m32.716s 00:05:19.092 sys 0m9.379s 00:05:19.092 11:29:22 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.092 ************************************ 00:05:19.092 END TEST event 00:05:19.092 ************************************ 00:05:19.092 11:29:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.092 11:29:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:19.092 11:29:22 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:19.092 11:29:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.092 11:29:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.092 11:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:19.092 ************************************ 00:05:19.092 START TEST thread 
00:05:19.092 ************************************ 00:05:19.092 11:29:22 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:19.092 * Looking for test storage... 00:05:19.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:19.092 11:29:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.092 11:29:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:19.092 11:29:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.092 11:29:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.092 ************************************ 00:05:19.092 START TEST thread_poller_perf 00:05:19.092 ************************************ 00:05:19.092 11:29:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.092 [2024-07-12 11:29:22.438430] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:19.092 [2024-07-12 11:29:22.438672] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61097 ] 00:05:19.350 [2024-07-12 11:29:22.574816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.350 [2024-07-12 11:29:22.708511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.350 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:20.725 ====================================== 00:05:20.725 busy:2213988328 (cyc) 00:05:20.725 total_run_count: 307000 00:05:20.725 tsc_hz: 2200000000 (cyc) 00:05:20.725 ====================================== 00:05:20.725 poller_cost: 7211 (cyc), 3277 (nsec) 00:05:20.725 00:05:20.725 real 0m1.390s 00:05:20.725 user 0m1.222s 00:05:20.725 sys 0m0.060s 00:05:20.725 11:29:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.725 11:29:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.725 ************************************ 00:05:20.725 END TEST thread_poller_perf 00:05:20.725 ************************************ 00:05:20.725 11:29:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:20.725 11:29:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.725 11:29:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:20.725 11:29:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.725 11:29:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.725 ************************************ 00:05:20.725 START TEST thread_poller_perf 00:05:20.725 ************************************ 00:05:20.725 11:29:23 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.725 [2024-07-12 11:29:23.878684] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
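The poller_perf summary above is the measured busy cycle count averaged over the run count and converted to nanoseconds with the reported TSC frequency. The figures can be reproduced with shell integer arithmetic (the rounding inside poller_perf itself may differ slightly):

    busy=2213988328 runs=307000 tsc_hz=2200000000
    echo "poller_cost_cyc=$(( busy / runs ))"                           # 7211
    echo "poller_cost_nsec=$(( (busy / runs) * 1000000000 / tsc_hz ))"  # 3277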
00:05:20.725 [2024-07-12 11:29:23.878926] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61133 ] 00:05:20.725 [2024-07-12 11:29:24.011310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.725 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:20.725 [2024-07-12 11:29:24.145775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.102 ====================================== 00:05:22.102 busy:2202876277 (cyc) 00:05:22.102 total_run_count: 3966000 00:05:22.102 tsc_hz: 2200000000 (cyc) 00:05:22.102 ====================================== 00:05:22.102 poller_cost: 555 (cyc), 252 (nsec) 00:05:22.102 00:05:22.102 real 0m1.377s 00:05:22.102 user 0m1.204s 00:05:22.102 sys 0m0.064s 00:05:22.102 11:29:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.102 ************************************ 00:05:22.102 END TEST thread_poller_perf 00:05:22.102 ************************************ 00:05:22.102 11:29:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.102 11:29:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:22.102 11:29:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:22.102 ************************************ 00:05:22.102 END TEST thread 00:05:22.102 ************************************ 00:05:22.102 00:05:22.102 real 0m2.952s 00:05:22.102 user 0m2.491s 00:05:22.102 sys 0m0.238s 00:05:22.102 11:29:25 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.102 11:29:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.102 11:29:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.102 11:29:25 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:22.102 11:29:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.102 11:29:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.102 11:29:25 -- common/autotest_common.sh@10 -- # set +x 00:05:22.102 ************************************ 00:05:22.102 START TEST accel 00:05:22.102 ************************************ 00:05:22.102 11:29:25 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:22.102 * Looking for test storage... 00:05:22.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:22.102 11:29:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:22.102 11:29:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:22.102 11:29:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.102 11:29:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61202 00:05:22.102 11:29:25 accel -- accel/accel.sh@63 -- # waitforlisten 61202 00:05:22.102 11:29:25 accel -- common/autotest_common.sh@829 -- # '[' -z 61202 ']' 00:05:22.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.102 11:29:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.102 11:29:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.102 11:29:25 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:22.102 11:29:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.102 11:29:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.102 11:29:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:22.102 11:29:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.102 11:29:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.102 11:29:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.102 11:29:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.102 11:29:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.102 11:29:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.102 11:29:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:22.102 11:29:25 accel -- accel/accel.sh@41 -- # jq -r . 00:05:22.102 [2024-07-12 11:29:25.504732] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:22.102 [2024-07-12 11:29:25.504890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61202 ] 00:05:22.368 [2024-07-12 11:29:25.643983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.368 [2024-07-12 11:29:25.764354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.643 [2024-07-12 11:29:25.821569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:23.210 11:29:26 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.210 11:29:26 accel -- common/autotest_common.sh@862 -- # return 0 00:05:23.210 11:29:26 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:23.210 11:29:26 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:23.210 11:29:26 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:23.210 11:29:26 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:23.210 11:29:26 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:23.210 11:29:26 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:23.210 11:29:26 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:23.210 11:29:26 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.210 11:29:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.210 11:29:26 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 
11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.210 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.210 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.210 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.211 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.211 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.211 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.211 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.211 11:29:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.211 11:29:26 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.211 11:29:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.211 11:29:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.211 11:29:26 accel -- accel/accel.sh@75 -- # killprocess 61202 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@948 -- # '[' -z 61202 ']' 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@952 -- # kill -0 61202 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@953 -- # uname 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61202 00:05:23.211 killing process with pid 61202 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61202' 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@967 -- # kill 61202 00:05:23.211 11:29:26 accel -- common/autotest_common.sh@972 -- # wait 61202 00:05:23.777 11:29:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:23.777 11:29:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:23.777 11:29:26 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:23.777 11:29:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.777 11:29:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.777 11:29:26 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:23.777 11:29:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:23.777 11:29:26 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.777 11:29:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:23.777 11:29:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.777 11:29:27 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:23.777 11:29:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:23.777 11:29:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.777 11:29:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.777 ************************************ 00:05:23.777 START TEST accel_missing_filename 00:05:23.777 ************************************ 00:05:23.777 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:23.777 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:23.777 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:23.777 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:23.777 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.778 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:23.778 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.778 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:23.778 11:29:27 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:23.778 [2024-07-12 11:29:27.068860] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:23.778 [2024-07-12 11:29:27.068981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61259 ] 00:05:23.778 [2024-07-12 11:29:27.204146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.036 [2024-07-12 11:29:27.328242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.036 [2024-07-12 11:29:27.387768] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.036 [2024-07-12 11:29:27.472209] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:24.294 A filename is required. 
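The failure above is the expected one: a compress workload needs an input file, which the next test supplies with -l. An illustrative pair of invocations, assuming they are run from the root of an SPDK checkout:

    ./build/examples/accel_perf -t 1 -w compress                      # fails: "A filename is required."
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib  # -l names the uncompressed input file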
00:05:24.294 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:24.294 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.294 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:24.294 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:24.294 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:24.294 ************************************ 00:05:24.294 END TEST accel_missing_filename 00:05:24.294 ************************************ 00:05:24.294 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.294 00:05:24.294 real 0m0.525s 00:05:24.294 user 0m0.348s 00:05:24.294 sys 0m0.125s 00:05:24.294 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.294 11:29:27 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:24.294 11:29:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.294 11:29:27 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.294 11:29:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:24.294 11:29:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.294 11:29:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.294 ************************************ 00:05:24.294 START TEST accel_compress_verify 00:05:24.294 ************************************ 00:05:24.294 11:29:27 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.294 11:29:27 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:24.294 11:29:27 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.294 11:29:27 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:24.294 11:29:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.294 11:29:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:24.294 11:29:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.294 11:29:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.294 11:29:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.294 11:29:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:24.294 11:29:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.294 11:29:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.294 11:29:27 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.294 11:29:27 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.294 11:29:27 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.294 11:29:27 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:24.294 11:29:27 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:24.294 [2024-07-12 11:29:27.641121] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:24.294 [2024-07-12 11:29:27.641213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61283 ] 00:05:24.552 [2024-07-12 11:29:27.779720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.552 [2024-07-12 11:29:27.903809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.552 [2024-07-12 11:29:27.962849] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.811 [2024-07-12 11:29:28.041969] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:24.811 00:05:24.811 Compression does not support the verify option, aborting. 00:05:24.811 11:29:28 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:24.811 11:29:28 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.811 11:29:28 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:24.811 11:29:28 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:24.811 11:29:28 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:24.811 11:29:28 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.811 ************************************ 00:05:24.811 END TEST accel_compress_verify 00:05:24.811 ************************************ 00:05:24.811 00:05:24.811 real 0m0.520s 00:05:24.811 user 0m0.347s 00:05:24.811 sys 0m0.119s 00:05:24.811 11:29:28 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.811 11:29:28 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:24.811 11:29:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.811 11:29:28 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:24.811 11:29:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:24.811 11:29:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.811 11:29:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.811 ************************************ 00:05:24.811 START TEST accel_wrong_workload 00:05:24.811 ************************************ 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
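The NOT prefix used by accel_missing_filename and accel_compress_verify above, and by the accel_wrong_workload command just traced, is how the harness asserts that an invocation fails. Its exit-status bookkeeping (the es=..., (( es > 128 )) and case "$es" fragments visible around these tests) lives in common/autotest_common.sh; what follows is a simplified sketch of the idea, not the actual helper:

    # Simplified NOT-style wrapper (sketch): succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=1   # collapse signal-style exit codes into a plain failure
        (( es != 0 ))
    }
    NOT accel_perf -t 1 -w foobar   # returns success, because foobar is rejected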
00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:24.811 11:29:28 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:24.811 Unsupported workload type: foobar 00:05:24.811 [2024-07-12 11:29:28.216668] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:24.811 accel_perf options: 00:05:24.811 [-h help message] 00:05:24.811 [-q queue depth per core] 00:05:24.811 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:24.811 [-T number of threads per core 00:05:24.811 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:24.811 [-t time in seconds] 00:05:24.811 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:24.811 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:24.811 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:24.811 [-l for compress/decompress workloads, name of uncompressed input file 00:05:24.811 [-S for crc32c workload, use this seed value (default 0) 00:05:24.811 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:24.811 [-f for fill workload, use this BYTE value (default 255) 00:05:24.811 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:24.811 [-y verify result if this switch is on] 00:05:24.811 [-a tasks to allocate per core (default: same value as -q)] 00:05:24.811 Can be used to spread operations across a wider range of memory. 
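The block above is accel_perf's own option summary, printed when -w foobar was rejected. For reference, a well-formed invocation assembled from those options, mirroring the crc32c run that appears further down in this log (sketch only, with the harness's -c /dev/fd/62 config descriptor omitted):

    # Sketch: -t 1 run for one second, -w crc32c workload, -S 32 crc32c seed, -y verify result.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y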
00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.811 ************************************ 00:05:24.811 END TEST accel_wrong_workload 00:05:24.811 ************************************ 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.811 00:05:24.811 real 0m0.038s 00:05:24.811 user 0m0.020s 00:05:24.811 sys 0m0.016s 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.811 11:29:28 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 11:29:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.070 11:29:28 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:25.070 11:29:28 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:25.070 11:29:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.070 11:29:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 ************************************ 00:05:25.070 START TEST accel_negative_buffers 00:05:25.070 ************************************ 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:25.070 11:29:28 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:25.070 -x option must be non-negative. 
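The "-x option must be non-negative." rejection above (accel_perf reprints its option summary just below) is the expected result of passing -x -1: per that summary, -x sets the number of xor source buffers, with a minimum of 2. A sketch of a valid xor invocation under the same options (not part of the recorded run):

    # Sketch: xor across three source buffers, with result verification enabled.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3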
00:05:25.070 [2024-07-12 11:29:28.297397] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:25.070 accel_perf options: 00:05:25.070 [-h help message] 00:05:25.070 [-q queue depth per core] 00:05:25.070 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:25.070 [-T number of threads per core 00:05:25.070 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:25.070 [-t time in seconds] 00:05:25.070 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:25.070 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:25.070 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:25.070 [-l for compress/decompress workloads, name of uncompressed input file 00:05:25.070 [-S for crc32c workload, use this seed value (default 0) 00:05:25.070 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:25.070 [-f for fill workload, use this BYTE value (default 255) 00:05:25.070 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:25.070 [-y verify result if this switch is on] 00:05:25.070 [-a tasks to allocate per core (default: same value as -q)] 00:05:25.070 Can be used to spread operations across a wider range of memory. 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.070 ************************************ 00:05:25.070 END TEST accel_negative_buffers 00:05:25.070 ************************************ 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.070 00:05:25.070 real 0m0.032s 00:05:25.070 user 0m0.017s 00:05:25.070 sys 0m0.015s 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.070 11:29:28 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 11:29:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.070 11:29:28 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:25.070 11:29:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:25.070 11:29:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.070 11:29:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.070 ************************************ 00:05:25.070 START TEST accel_crc32c 00:05:25.070 ************************************ 00:05:25.070 11:29:28 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:25.070 11:29:28 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:25.070 [2024-07-12 11:29:28.382987] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:25.070 [2024-07-12 11:29:28.383115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61342 ] 00:05:25.328 [2024-07-12 11:29:28.520260] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.328 [2024-07-12 11:29:28.635692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.328 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.329 11:29:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.329 11:29:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.329 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.329 11:29:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.703 ************************************ 00:05:26.703 END TEST accel_crc32c 00:05:26.703 ************************************ 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.703 11:29:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.704 11:29:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:26.704 11:29:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.704 00:05:26.704 real 0m1.526s 00:05:26.704 user 0m1.315s 00:05:26.704 sys 0m0.117s 00:05:26.704 11:29:29 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.704 11:29:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:26.704 11:29:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.704 11:29:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:26.704 11:29:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:26.704 11:29:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.704 11:29:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.704 ************************************ 00:05:26.704 START TEST accel_crc32c_C2 00:05:26.704 ************************************ 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:26.704 11:29:29 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:26.704 11:29:29 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:26.704 [2024-07-12 11:29:29.959925] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:26.704 [2024-07-12 11:29:29.960057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61382 ] 00:05:26.704 [2024-07-12 11:29:30.108976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.962 [2024-07-12 11:29:30.244503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.962 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.963 11:29:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.337 11:29:31 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.337 00:05:28.337 real 0m1.550s 00:05:28.337 user 0m1.333s 00:05:28.337 sys 0m0.123s 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.337 11:29:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:28.337 ************************************ 00:05:28.337 END TEST accel_crc32c_C2 00:05:28.337 ************************************ 00:05:28.337 11:29:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.337 11:29:31 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:28.337 11:29:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:28.338 11:29:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.338 11:29:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.338 ************************************ 00:05:28.338 START TEST accel_copy 00:05:28.338 ************************************ 00:05:28.338 11:29:31 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.338 11:29:31 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:28.338 11:29:31 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:28.338 [2024-07-12 11:29:31.554392] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:28.338 [2024-07-12 11:29:31.554482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61411 ] 00:05:28.338 [2024-07-12 11:29:31.693725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.596 [2024-07-12 11:29:31.807004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 
11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.596 11:29:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.970 ************************************ 00:05:29.970 END TEST accel_copy 00:05:29.970 ************************************ 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:29.970 11:29:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.970 00:05:29.970 real 0m1.503s 00:05:29.970 user 0m0.014s 00:05:29.970 sys 0m0.003s 00:05:29.970 11:29:33 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.970 11:29:33 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:29.970 11:29:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.970 11:29:33 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.970 11:29:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:29.970 11:29:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.970 11:29:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.970 ************************************ 00:05:29.970 START TEST accel_fill 00:05:29.970 ************************************ 00:05:29.970 11:29:33 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.970 11:29:33 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:29.970 11:29:33 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:29.970 [2024-07-12 11:29:33.101556] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:29.970 [2024-07-12 11:29:33.101653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61451 ] 00:05:29.970 [2024-07-12 11:29:33.238737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.970 [2024-07-12 11:29:33.365158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.228 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.229 11:29:33 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.229 11:29:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
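The val= reads traced above decode the configuration handed to accel_perf for accel_fill: val=fill is the workload, val=0x80 is the -f 128 fill byte (128 decimal = 0x80), and the two val=64 entries appear to correspond to -q 64 and -a 64 from the run_test line earlier (the other tests in this log, which pass neither option, show 32 32 in the same positions). A sketch of the equivalent direct invocation (harness config descriptor omitted):

    # Sketch: fill 4 KiB buffers with byte 0x80, queue depth 64, 64 tasks per core, verify.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y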
00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.163 ************************************ 00:05:31.163 END TEST accel_fill 00:05:31.163 ************************************ 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:31.163 11:29:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.163 00:05:31.163 real 0m1.526s 00:05:31.163 user 0m1.323s 00:05:31.163 sys 0m0.112s 00:05:31.163 11:29:34 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.163 11:29:34 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:31.421 11:29:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.421 11:29:34 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:31.421 11:29:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:31.421 11:29:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.421 11:29:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.421 ************************************ 00:05:31.421 START TEST accel_copy_crc32c 00:05:31.421 ************************************ 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:31.421 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:31.421 [2024-07-12 11:29:34.672417] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:31.421 [2024-07-12 11:29:34.672523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:05:31.421 [2024-07-12 11:29:34.808029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.679 [2024-07-12 11:29:34.923760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.679 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.680 11:29:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.076 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.077 00:05:33.077 real 0m1.513s 00:05:33.077 user 0m1.303s 00:05:33.077 sys 0m0.117s 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.077 ************************************ 00:05:33.077 END TEST accel_copy_crc32c 00:05:33.077 ************************************ 00:05:33.077 11:29:36 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:33.077 11:29:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.077 11:29:36 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:33.077 11:29:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:33.077 11:29:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.077 11:29:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.077 ************************************ 00:05:33.077 START TEST accel_copy_crc32c_C2 00:05:33.077 ************************************ 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.077 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:33.077 [2024-07-12 11:29:36.235788] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:33.077 [2024-07-12 11:29:36.235883] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61520 ] 00:05:33.077 [2024-07-12 11:29:36.372686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.077 [2024-07-12 11:29:36.489522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.335 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.336 11:29:36 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.336 11:29:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.270 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.529 00:05:34.529 real 0m1.512s 00:05:34.529 user 0m1.307s 00:05:34.529 sys 0m0.107s 00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:05:34.529 11:29:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:34.529 ************************************ 00:05:34.529 END TEST accel_copy_crc32c_C2 00:05:34.529 ************************************ 00:05:34.529 11:29:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.529 11:29:37 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:34.529 11:29:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.529 11:29:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.529 11:29:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.529 ************************************ 00:05:34.529 START TEST accel_dualcast 00:05:34.529 ************************************ 00:05:34.529 11:29:37 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:34.529 11:29:37 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:34.529 [2024-07-12 11:29:37.797405] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:34.529 [2024-07-12 11:29:37.797496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61549 ] 00:05:34.529 [2024-07-12 11:29:37.928197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.787 [2024-07-12 11:29:38.062255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:34.787 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.788 11:29:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:36.165 ************************************ 00:05:36.165 END TEST accel_dualcast 00:05:36.165 ************************************ 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.165 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.166 11:29:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.166 11:29:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.166 11:29:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:36.166 11:29:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.166 00:05:36.166 real 0m1.526s 00:05:36.166 user 0m1.310s 00:05:36.166 sys 0m0.122s 00:05:36.166 11:29:39 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.166 11:29:39 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:36.166 11:29:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.166 11:29:39 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:36.166 11:29:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:36.166 11:29:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.166 11:29:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.166 ************************************ 00:05:36.166 START TEST accel_compare 00:05:36.166 ************************************ 00:05:36.166 11:29:39 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:36.166 11:29:39 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:36.166 [2024-07-12 11:29:39.367570] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:36.166 [2024-07-12 11:29:39.367671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61589 ] 00:05:36.166 [2024-07-12 11:29:39.499666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.425 [2024-07-12 11:29:39.618356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.425 11:29:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.802 11:29:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.803 11:29:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 ************************************ 00:05:37.803 END TEST accel_compare 00:05:37.803 ************************************ 00:05:37.803 11:29:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.803 11:29:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:37.803 11:29:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.803 00:05:37.803 real 0m1.498s 00:05:37.803 user 0m1.296s 00:05:37.803 sys 0m0.110s 00:05:37.803 11:29:40 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.803 11:29:40 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:37.803 11:29:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.803 11:29:40 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:37.803 11:29:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:37.803 11:29:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.803 11:29:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.803 ************************************ 00:05:37.803 START TEST accel_xor 00:05:37.803 ************************************ 00:05:37.803 11:29:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:37.803 11:29:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:37.803 [2024-07-12 11:29:40.916538] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:37.803 [2024-07-12 11:29:40.916672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61618 ] 00:05:37.803 [2024-07-12 11:29:41.057768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.803 [2024-07-12 11:29:41.170761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.803 11:29:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.178 11:29:42 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.178 ************************************ 00:05:39.178 END TEST accel_xor 00:05:39.178 ************************************ 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.178 11:29:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.179 00:05:39.179 real 0m1.508s 00:05:39.179 user 0m1.299s 00:05:39.179 sys 0m0.116s 00:05:39.179 11:29:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.179 11:29:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:39.179 11:29:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.179 11:29:42 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:39.179 11:29:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:39.179 11:29:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.179 11:29:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.179 ************************************ 00:05:39.179 START TEST accel_xor 00:05:39.179 ************************************ 00:05:39.179 11:29:42 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:39.179 11:29:42 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:39.179 [2024-07-12 11:29:42.481879] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:39.179 [2024-07-12 11:29:42.481982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61658 ] 00:05:39.179 [2024-07-12 11:29:42.624757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.437 [2024-07-12 11:29:42.754622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.437 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.438 11:29:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.835 
************************************ 00:05:40.835 END TEST accel_xor 00:05:40.835 ************************************ 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:40.835 11:29:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.835 00:05:40.835 real 0m1.533s 00:05:40.835 user 0m1.314s 00:05:40.835 sys 0m0.123s 00:05:40.835 11:29:43 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.835 11:29:43 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:40.835 11:29:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.835 11:29:44 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:40.835 11:29:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:40.836 11:29:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.836 11:29:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.836 ************************************ 00:05:40.836 START TEST accel_dif_verify 00:05:40.836 ************************************ 00:05:40.836 11:29:44 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:40.836 11:29:44 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:40.836 [2024-07-12 11:29:44.062774] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
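Annotation: the trace above shows the harness launching the dif_verify workload through the standard accel_perf example, feeding its generated JSON config over /dev/fd/62. A minimal sketch of repeating just that workload by hand, reusing only the invocation visible in this log (the config-pipe argument is dropped here, so accel_perf falls back to its software-module defaults), would be roughly:

    # Sketch only: path and options are copied verbatim from the trace above.
    # The harness additionally passes "-c /dev/fd/62" to supply its JSON config;
    # that argument is omitted for an interactive run.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify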
00:05:40.836 [2024-07-12 11:29:44.062861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61687 ] 00:05:40.836 [2024-07-12 11:29:44.199888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.095 [2024-07-12 11:29:44.332904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:41.095 11:29:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.472 11:29:45 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:42.472 11:29:45 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.472 00:05:42.472 real 0m1.524s 00:05:42.472 user 0m1.308s 00:05:42.472 sys 0m0.121s 00:05:42.472 11:29:45 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.472 11:29:45 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:42.472 ************************************ 00:05:42.472 END TEST accel_dif_verify 00:05:42.472 ************************************ 00:05:42.472 11:29:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.472 11:29:45 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:42.472 11:29:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:42.472 11:29:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.472 11:29:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.472 ************************************ 00:05:42.472 START TEST accel_dif_generate 00:05:42.472 ************************************ 00:05:42.472 11:29:45 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.472 11:29:45 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:42.472 11:29:45 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:42.472 [2024-07-12 11:29:45.639569] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:42.472 [2024-07-12 11:29:45.639695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61727 ] 00:05:42.472 [2024-07-12 11:29:45.773697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.472 [2024-07-12 11:29:45.891135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.732 11:29:45 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.732 11:29:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.733 11:29:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.733 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.733 11:29:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.687 ************************************ 00:05:43.687 END TEST accel_dif_generate 00:05:43.687 ************************************ 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.687 11:29:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:43.687 
11:29:47 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.687 00:05:43.687 real 0m1.502s 00:05:43.687 user 0m1.284s 00:05:43.687 sys 0m0.124s 00:05:43.687 11:29:47 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.687 11:29:47 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:43.951 11:29:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.951 11:29:47 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:43.951 11:29:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:43.951 11:29:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.951 11:29:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.951 ************************************ 00:05:43.951 START TEST accel_dif_generate_copy 00:05:43.951 ************************************ 00:05:43.951 11:29:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:43.951 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:43.951 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:43.951 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.951 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:43.952 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:43.952 [2024-07-12 11:29:47.186462] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
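Annotation: the repeated "IFS=:", "read -r var val" and "case \"$var\"" lines throughout this transcript come from accel.sh parsing the "key: value" summary that accel_perf prints, which is how accel_opc and accel_module end up holding values such as dif_generate_copy and software. The exact key strings matched by the case statement are not visible in the xtrace output, so the patterns below are placeholders; this is a hypothetical reconstruction of the shape of that loop, not a copy of accel.sh:

    # Hypothetical reconstruction of the parsing pattern seen in the trace.
    # KEY_FOR_OPCODE / KEY_FOR_MODULE stand in for whatever keys accel.sh actually matches.
    # The harness also passes "-c /dev/fd/62"; it is left out of this standalone sketch.
    while IFS=: read -r var val; do
        case "$var" in
            *KEY_FOR_OPCODE*) accel_opc=$(echo $val) ;;   # unquoted echo trims the space after the colon; ends up e.g. "dif_generate_copy"
            *KEY_FOR_MODULE*) accel_module=$(echo $val) ;; # ends up e.g. "software"
        esac
    done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy)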
00:05:43.952 [2024-07-12 11:29:47.186548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61756 ] 00:05:43.952 [2024-07-12 11:29:47.320841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.217 [2024-07-12 11:29:47.439121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:44.217 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.218 11:29:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.592 00:05:45.592 real 0m1.513s 00:05:45.592 user 0m1.304s 00:05:45.592 sys 0m0.116s 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.592 ************************************ 00:05:45.592 END TEST accel_dif_generate_copy 00:05:45.592 ************************************ 00:05:45.592 11:29:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:45.592 11:29:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.592 11:29:48 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:45.592 11:29:48 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.592 11:29:48 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:45.592 11:29:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.592 11:29:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.592 ************************************ 00:05:45.592 START TEST accel_comp 00:05:45.592 ************************************ 00:05:45.592 11:29:48 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.592 11:29:48 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:45.592 11:29:48 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:45.592 11:29:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.592 11:29:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.592 11:29:48 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.592 11:29:48 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.592 11:29:48 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:45.593 11:29:48 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.593 11:29:48 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.593 11:29:48 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.593 11:29:48 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.593 11:29:48 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.593 11:29:48 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:45.593 11:29:48 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:45.593 [2024-07-12 11:29:48.753828] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:45.593 [2024-07-12 11:29:48.753936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61796 ] 00:05:45.593 [2024-07-12 11:29:48.893313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.593 [2024-07-12 11:29:49.006471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.851 11:29:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:46.793 11:29:50 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.793 00:05:46.793 real 0m1.506s 00:05:46.793 user 0m1.293s 00:05:46.793 sys 0m0.121s 00:05:46.793 11:29:50 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.793 11:29:50 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:46.793 ************************************ 00:05:46.793 END TEST accel_comp 00:05:46.793 ************************************ 00:05:47.052 11:29:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.052 11:29:50 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.052 11:29:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:47.052 11:29:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.052 11:29:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.052 ************************************ 00:05:47.052 START TEST accel_decomp 00:05:47.052 ************************************ 00:05:47.052 11:29:50 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:47.052 11:29:50 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:47.052 [2024-07-12 11:29:50.304702] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:47.052 [2024-07-12 11:29:50.305262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61831 ] 00:05:47.052 [2024-07-12 11:29:50.444910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.311 [2024-07-12 11:29:50.564850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.311 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.312 11:29:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:48.685 11:29:51 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.685 00:05:48.685 real 0m1.507s 00:05:48.685 user 0m0.011s 00:05:48.685 sys 0m0.003s 00:05:48.685 11:29:51 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.685 ************************************ 00:05:48.685 11:29:51 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:48.685 END TEST accel_decomp 00:05:48.685 ************************************ 00:05:48.685 11:29:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.685 11:29:51 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:48.685 11:29:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:48.685 11:29:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.685 11:29:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.685 ************************************ 00:05:48.685 START TEST accel_decomp_full 00:05:48.686 ************************************ 00:05:48.686 11:29:51 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:48.686 11:29:51 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:48.686 [2024-07-12 11:29:51.859389] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
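This block starts the accel_decomp_full case with the same accel_perf binary as the accel_decomp case that just finished; the visible difference is the added -o 0 and the traced payload size ('111250 bytes' below versus '4096 bytes' above). Copied out of the log (the JSON config on /dev/fd/62 is supplied by the wrapper, see the note further down), the invocation is essentially:

  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  # -o 0 apparently sizes the output buffer to the whole decompressed payload;
  # the trace below reports '111250 bytes' where the plain accel_decomp run
  # above reported '4096 bytes'
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -o 0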
00:05:48.686 [2024-07-12 11:29:51.859486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61867 ] 00:05:48.686 [2024-07-12 11:29:51.997840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.686 [2024-07-12 11:29:52.123340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.943 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.944 11:29:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.319 ************************************ 00:05:50.319 END TEST accel_decomp_full 00:05:50.319 ************************************ 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:50.319 11:29:53 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.319 00:05:50.319 real 0m1.530s 00:05:50.319 user 0m1.315s 00:05:50.319 sys 0m0.123s 00:05:50.319 11:29:53 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.319 11:29:53 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:50.319 11:29:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.319 11:29:53 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:50.319 11:29:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:50.319 11:29:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.319 11:29:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.319 ************************************ 00:05:50.319 START TEST accel_decomp_mcore 00:05:50.319 ************************************ 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:50.319 [2024-07-12 11:29:53.440188] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:50.319 [2024-07-12 11:29:53.440291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61902 ] 00:05:50.319 [2024-07-12 11:29:53.577596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.319 [2024-07-12 11:29:53.696996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.319 [2024-07-12 11:29:53.697132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.319 [2024-07-12 11:29:53.697262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.319 [2024-07-12 11:29:53.697429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.578 11:29:53 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.512 ************************************ 00:05:51.512 END TEST accel_decomp_mcore 00:05:51.512 ************************************ 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.512 00:05:51.512 real 0m1.516s 00:05:51.512 user 0m4.687s 00:05:51.512 sys 0m0.130s 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.512 11:29:54 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:51.770 11:29:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.770 11:29:54 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:51.770 11:29:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:51.770 11:29:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.770 11:29:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.770 ************************************ 00:05:51.770 START TEST accel_decomp_full_mcore 00:05:51.770 ************************************ 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.770 11:29:54 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:51.770 11:29:54 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:51.770 [2024-07-12 11:29:55.007803] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:51.770 [2024-07-12 11:29:55.007911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61939 ] 00:05:51.770 [2024-07-12 11:29:55.146802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.029 [2024-07-12 11:29:55.265946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.029 [2024-07-12 11:29:55.266081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.029 [2024-07-12 11:29:55.266183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.029 [2024-07-12 11:29:55.266340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:52.029 11:29:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:52.029 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.030 11:29:55 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.030 11:29:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.474 00:05:53.474 real 0m1.539s 00:05:53.474 user 0m4.755s 00:05:53.474 sys 0m0.126s 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.474 11:29:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:53.474 ************************************ 00:05:53.474 END TEST accel_decomp_full_mcore 00:05:53.474 ************************************ 00:05:53.474 11:29:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.474 11:29:56 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:53.474 11:29:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:53.474 11:29:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.474 11:29:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.474 ************************************ 00:05:53.474 START TEST accel_decomp_mthread 00:05:53.474 ************************************ 00:05:53.474 11:29:56 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:53.474 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:53.474 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:53.474 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:53.475 [2024-07-12 11:29:56.594141] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
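The two _mcore cases above ran accel_perf with -m 0xf, so four reactors came up (cores 0 through 3 in the trace) and polled for the whole run, which is consistent with their summaries showing user 0m4.687s and 0m4.755s against roughly 1.5 s real. The _mthread case starting here keeps a single core (the EAL line below shows -c 0x1) and passes -T 2 instead, presumably the worker-thread count. Side by side, as logged:

  PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  # _mcore: four reactors via the 0xf core mask (cores 0-3 in the trace)
  $PERF -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -m 0xf
  # _mthread: one core (EAL -c 0x1) with -T 2, presumably the thread count
  $PERF -c /dev/fd/62 -t 1 -w decompress -l "$BIB" -y -T 2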
00:05:53.475 [2024-07-12 11:29:56.594224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61977 ] 00:05:53.475 [2024-07-12 11:29:56.727724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.475 [2024-07-12 11:29:56.842697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
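Every accel_perf launch in this stretch reads its configuration from -c /dev/fd/62: build_accel_config collects module options in the accel_json_cfg array (left empty here, since all of the [[ 0 -gt 0 ]] guards above are false) and the wrapper apparently hands the resulting JSON to the binary over a process-substitution style descriptor rather than a file on disk. The mechanism in isolation, with a throwaway reader standing in for accel_perf:

  # bash process substitution: the child is handed a readable /dev/fd/NN path
  # (it surfaces as /dev/fd/62 inside accel.sh) instead of a file on disk
  show_cfg() { echo "config arrives as $1:"; cat "$1"; }
  show_cfg <(echo '{"subsystems": []}')

With nothing extra configured, the software engine ends up doing the work, which matches the accel_module=software assignments and the software == software checks at the end of each case.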
00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.475 11:29:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.850 00:05:54.850 real 0m1.500s 00:05:54.850 user 0m1.286s 00:05:54.850 sys 0m0.119s 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.850 ************************************ 00:05:54.850 END TEST accel_decomp_mthread 00:05:54.850 ************************************ 00:05:54.850 11:29:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:54.850 11:29:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.850 11:29:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:54.850 11:29:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:54.850 11:29:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.850 11:29:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.850 ************************************ 00:05:54.850 START 
TEST accel_decomp_full_mthread 00:05:54.850 ************************************ 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:54.850 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:54.850 [2024-07-12 11:29:58.142726] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
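When a case finishes, accel.sh re-checks what actually ran; in the trace this shows up as the accel.sh@27 lines [[ -n software ]], [[ -n decompress ]] and [[ software == \s\o\f\t\w\a\r\e ]]. The backslashes are not in the script, they are how bash xtrace prints the literal right-hand pattern of a [[ == ]] comparison after expansion. Unexpanded, the checks presumably have this shape:

  # post-run sanity checks as implied by the accel.sh@27 trace lines
  [[ -n $accel_module ]]            # a module was recorded
  [[ -n $accel_opc ]]               # an opcode was recorded ("decompress")
  [[ $accel_module == software ]]   # xtrace renders the pattern as \s\o\f\t\w\a\r\e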
00:05:54.850 [2024-07-12 11:29:58.142822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62011 ] 00:05:54.850 [2024-07-12 11:29:58.280342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.109 [2024-07-12 11:29:58.393795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.109 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.109 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.109 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.109 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.109 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.109 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.109 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.109 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.110 11:29:58 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.110 11:29:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.495 00:05:56.495 real 0m1.534s 00:05:56.495 user 0m1.326s 00:05:56.495 sys 0m0.114s 00:05:56.495 ************************************ 00:05:56.495 END TEST accel_decomp_full_mthread 00:05:56.495 ************************************ 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.495 11:29:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
00:05:56.495 11:29:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.495 11:29:59 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:56.495 11:29:59 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:56.495 11:29:59 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:56.495 11:29:59 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:56.495 11:29:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.495 11:29:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.495 11:29:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.495 11:29:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.495 11:29:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.495 11:29:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.495 11:29:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.496 11:29:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:56.496 11:29:59 accel -- accel/accel.sh@41 -- # jq -r . 00:05:56.496 ************************************ 00:05:56.496 START TEST accel_dif_functional_tests 00:05:56.496 ************************************ 00:05:56.496 11:29:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:56.496 [2024-07-12 11:29:59.750232] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:56.496 [2024-07-12 11:29:59.750336] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62049 ] 00:05:56.496 [2024-07-12 11:29:59.890964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.754 [2024-07-12 11:30:00.015552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.754 [2024-07-12 11:30:00.015693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.754 [2024-07-12 11:30:00.015698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.754 [2024-07-12 11:30:00.070017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.754 00:05:56.754 00:05:56.754 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.754 http://cunit.sourceforge.net/ 00:05:56.754 00:05:56.754 00:05:56.754 Suite: accel_dif 00:05:56.754 Test: verify: DIF generated, GUARD check ...passed 00:05:56.754 Test: verify: DIF generated, APPTAG check ...passed 00:05:56.754 Test: verify: DIF generated, REFTAG check ...passed 00:05:56.754 Test: verify: DIF not generated, GUARD check ...[2024-07-12 11:30:00.107833] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:56.754 passed 00:05:56.754 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 11:30:00.108233] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:56.754 passed 00:05:56.754 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 11:30:00.108569] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:56.754 passed 00:05:56.754 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:56.754 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 11:30:00.109217] dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:56.754 passed 00:05:56.754 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:56.754 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:56.754 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:56.754 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 11:30:00.110073] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:56.754 passed 00:05:56.754 Test: verify copy: DIF generated, GUARD check ...passed 00:05:56.754 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:56.754 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:56.754 Test: verify copy: DIF not generated, GUARD check ...passed 00:05:56.754 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 11:30:00.110487] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:56.754 [2024-07-12 11:30:00.110561] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:56.754 passed 00:05:56.754 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 11:30:00.110610] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:56.754 passed 00:05:56.754 Test: generate copy: DIF generated, GUARD check ...passed 00:05:56.755 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:56.755 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:56.755 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:56.755 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:56.755 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:56.755 Test: generate copy: iovecs-len validate ...[2024-07-12 11:30:00.111023] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:56.755 passed 00:05:56.755 Test: generate copy: buffer alignment validate ...passed 00:05:56.755 00:05:56.755 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.755 suites 1 1 n/a 0 0 00:05:56.755 tests 26 26 26 0 0 00:05:56.755 asserts 115 115 115 0 n/a 00:05:56.755 00:05:56.755 Elapsed time = 0.005 seconds 00:05:57.013 00:05:57.013 real 0m0.630s 00:05:57.013 user 0m0.820s 00:05:57.013 sys 0m0.159s 00:05:57.013 11:30:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.013 ************************************ 00:05:57.013 END TEST accel_dif_functional_tests 00:05:57.013 ************************************ 00:05:57.013 11:30:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:57.013 11:30:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.013 00:05:57.013 real 0m35.043s 00:05:57.013 user 0m36.768s 00:05:57.013 sys 0m3.984s 00:05:57.013 ************************************ 00:05:57.013 END TEST accel 00:05:57.013 ************************************ 00:05:57.013 11:30:00 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.013 11:30:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.013 11:30:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.013 11:30:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:57.013 11:30:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.013 11:30:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.013 11:30:00 -- common/autotest_common.sh@10 -- # set +x 00:05:57.013 ************************************ 00:05:57.013 START TEST accel_rpc 00:05:57.013 ************************************ 00:05:57.013 11:30:00 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:57.271 * Looking for test storage... 00:05:57.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:57.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.271 11:30:00 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.271 11:30:00 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62119 00:05:57.271 11:30:00 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62119 00:05:57.271 11:30:00 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:57.271 11:30:00 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62119 ']' 00:05:57.271 11:30:00 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.271 11:30:00 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.271 11:30:00 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.271 11:30:00 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.271 11:30:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.271 [2024-07-12 11:30:00.560996] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:05:57.271 [2024-07-12 11:30:00.561078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62119 ] 00:05:57.271 [2024-07-12 11:30:00.694595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.529 [2024-07-12 11:30:00.811156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.465 11:30:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:58.465 11:30:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:58.465 11:30:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:58.465 11:30:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:58.465 11:30:01 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 ************************************ 00:05:58.465 START TEST accel_assign_opcode 00:05:58.465 ************************************ 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 [2024-07-12 11:30:01.579703] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 [2024-07-12 11:30:01.587687] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 [2024-07-12 11:30:01.649845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.465 
11:30:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.465 software 00:05:58.465 00:05:58.465 real 0m0.287s 00:05:58.465 user 0m0.050s 00:05:58.465 sys 0m0.006s 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.465 ************************************ 00:05:58.465 END TEST accel_assign_opcode 00:05:58.465 ************************************ 00:05:58.465 11:30:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:58.465 11:30:01 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62119 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62119 ']' 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62119 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.465 11:30:01 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62119 00:05:58.724 killing process with pid 62119 00:05:58.724 11:30:01 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.724 11:30:01 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.724 11:30:01 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62119' 00:05:58.724 11:30:01 accel_rpc -- common/autotest_common.sh@967 -- # kill 62119 00:05:58.724 11:30:01 accel_rpc -- common/autotest_common.sh@972 -- # wait 62119 00:05:58.982 00:05:58.982 real 0m1.899s 00:05:58.982 user 0m2.035s 00:05:58.982 sys 0m0.417s 00:05:58.982 11:30:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.982 ************************************ 00:05:58.982 END TEST accel_rpc 00:05:58.982 ************************************ 00:05:58.982 11:30:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.982 11:30:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.982 11:30:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:58.982 11:30:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.982 11:30:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.982 11:30:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.982 ************************************ 00:05:58.982 START TEST app_cmdline 00:05:58.982 ************************************ 00:05:58.982 11:30:02 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:59.241 * Looking for test storage... 00:05:59.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:59.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.241 11:30:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:59.241 11:30:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62206 00:05:59.241 11:30:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62206 00:05:59.241 11:30:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:59.241 11:30:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62206 ']' 00:05:59.241 11:30:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.241 11:30:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.241 11:30:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.241 11:30:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.241 11:30:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:59.241 [2024-07-12 11:30:02.517894] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:05:59.241 [2024-07-12 11:30:02.517999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62206 ] 00:05:59.241 [2024-07-12 11:30:02.657093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.499 [2024-07-12 11:30:02.770592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.499 [2024-07-12 11:30:02.823651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.064 11:30:03 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.064 11:30:03 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:00.064 11:30:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:00.323 { 00:06:00.323 "version": "SPDK v24.09-pre git sha1 aebb775b1", 00:06:00.323 "fields": { 00:06:00.323 "major": 24, 00:06:00.323 "minor": 9, 00:06:00.323 "patch": 0, 00:06:00.323 "suffix": "-pre", 00:06:00.323 "commit": "aebb775b1" 00:06:00.323 } 00:06:00.323 } 00:06:00.323 11:30:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:00.323 11:30:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:00.323 11:30:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:00.323 11:30:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:00.323 11:30:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:00.323 11:30:03 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.323 11:30:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:00.323 11:30:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.323 11:30:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:00.323 11:30:03 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.582 11:30:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:00.582 11:30:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:00.582 11:30:03 app_cmdline -- app/cmdline.sh@30 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:00.582 11:30:03 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.582 request: 00:06:00.582 { 00:06:00.582 "method": "env_dpdk_get_mem_stats", 00:06:00.582 "req_id": 1 00:06:00.582 } 00:06:00.582 Got JSON-RPC error response 00:06:00.582 response: 00:06:00.582 { 00:06:00.582 "code": -32601, 00:06:00.582 "message": "Method not found" 00:06:00.582 } 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.841 11:30:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62206 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62206 ']' 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62206 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62206 00:06:00.841 killing process with pid 62206 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62206' 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@967 -- # kill 62206 00:06:00.841 11:30:04 app_cmdline -- common/autotest_common.sh@972 -- # wait 62206 00:06:01.100 00:06:01.100 real 0m2.098s 00:06:01.100 user 0m2.596s 00:06:01.100 sys 0m0.471s 00:06:01.100 11:30:04 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.100 11:30:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.100 ************************************ 00:06:01.100 END TEST app_cmdline 00:06:01.100 ************************************ 00:06:01.100 11:30:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.100 11:30:04 -- spdk/autotest.sh@186 -- # run_test 
version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:01.100 11:30:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.100 11:30:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.100 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.100 ************************************ 00:06:01.100 START TEST version 00:06:01.100 ************************************ 00:06:01.100 11:30:04 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:01.359 * Looking for test storage... 00:06:01.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:01.359 11:30:04 version -- app/version.sh@17 -- # get_header_version major 00:06:01.359 11:30:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.359 11:30:04 version -- app/version.sh@14 -- # cut -f2 00:06:01.359 11:30:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.359 11:30:04 version -- app/version.sh@17 -- # major=24 00:06:01.359 11:30:04 version -- app/version.sh@18 -- # get_header_version minor 00:06:01.359 11:30:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.359 11:30:04 version -- app/version.sh@14 -- # cut -f2 00:06:01.359 11:30:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.360 11:30:04 version -- app/version.sh@18 -- # minor=9 00:06:01.360 11:30:04 version -- app/version.sh@19 -- # get_header_version patch 00:06:01.360 11:30:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.360 11:30:04 version -- app/version.sh@14 -- # cut -f2 00:06:01.360 11:30:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.360 11:30:04 version -- app/version.sh@19 -- # patch=0 00:06:01.360 11:30:04 version -- app/version.sh@20 -- # get_header_version suffix 00:06:01.360 11:30:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.360 11:30:04 version -- app/version.sh@14 -- # cut -f2 00:06:01.360 11:30:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.360 11:30:04 version -- app/version.sh@20 -- # suffix=-pre 00:06:01.360 11:30:04 version -- app/version.sh@22 -- # version=24.9 00:06:01.360 11:30:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:01.360 11:30:04 version -- app/version.sh@28 -- # version=24.9rc0 00:06:01.360 11:30:04 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:01.360 11:30:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:01.360 11:30:04 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:01.360 11:30:04 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:01.360 ************************************ 00:06:01.360 END TEST version 00:06:01.360 ************************************ 00:06:01.360 00:06:01.360 real 0m0.140s 00:06:01.360 user 0m0.082s 00:06:01.360 sys 0m0.089s 00:06:01.360 11:30:04 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.360 11:30:04 version -- common/autotest_common.sh@10 -- # set +x 00:06:01.360 11:30:04 -- 
common/autotest_common.sh@1142 -- # return 0 00:06:01.360 11:30:04 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:01.360 11:30:04 -- spdk/autotest.sh@198 -- # uname -s 00:06:01.360 11:30:04 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:01.360 11:30:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:01.360 11:30:04 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:01.360 11:30:04 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:01.360 11:30:04 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:01.360 11:30:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.360 11:30:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.360 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:06:01.360 ************************************ 00:06:01.360 START TEST spdk_dd 00:06:01.360 ************************************ 00:06:01.360 11:30:04 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:01.360 * Looking for test storage... 00:06:01.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:01.360 11:30:04 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.360 11:30:04 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.360 11:30:04 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.360 11:30:04 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.360 11:30:04 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.360 11:30:04 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.360 11:30:04 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.360 11:30:04 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:01.360 11:30:04 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.360 11:30:04 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.930 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:01.930 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:01.930 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:01.930 11:30:05 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:01.930 11:30:05 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 
00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:01.930 11:30:05 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:01.930 11:30:05 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 
00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.930 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:01.931 11:30:05 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 
-- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:01.931 * spdk_dd linked to liburing 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:01.931 11:30:05 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:01.931 11:30:05 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 
00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:01.932 11:30:05 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:01.932 11:30:05 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:01.932 11:30:05 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:01.932 11:30:05 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:01.932 11:30:05 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:01.932 11:30:05 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:01.932 11:30:05 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:01.932 11:30:05 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:01.932 11:30:05 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.932 11:30:05 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.932 11:30:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:01.932 ************************************ 00:06:01.932 START TEST spdk_dd_basic_rw 00:06:01.932 ************************************ 00:06:01.932 11:30:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:01.932 * Looking for test storage... 
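The liburing probe traced above (dd/common.sh@142-144) reads one shared-object name per iteration and tests it against the pattern liburing.so.*; the checks at @149-@157 then fold in the CONFIG_URING flag from build_config.sh and the presence of /usr/lib64/liburing.so.2 before exporting liburing_in_use=1. A minimal sketch of that loop, under the assumption that the library list comes from ldd output on the spdk_dd binary (the producer side of the loop is not visible in this excerpt):

  # Hedged sketch of the liburing detection seen in dd/common.sh.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as it appears in the trace
  liburing_in_use=0
  while read -r lib _ so _; do
      if [[ $lib == liburing.so.* ]]; then
          printf '* spdk_dd linked to liburing\n'
          liburing_in_use=1
      fi
  done < <(ldd "$SPDK_DD")    # assumption: the lib/so pairs originate from ldd
  # The traced script additionally requires /usr/lib64/liburing.so.2 to exist
  # (dd/common.sh@152) before exporting liburing_in_use=1 (dd/common.sh@156).
  export liburing_in_use

Because liburing_in_use ends up as 1, the abort guard at dd/dd.sh@15 evaluates false and the run proceeds straight into run_test spdk_dd_basic_rw.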
00:06:02.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:02.193 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:02.194 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:02.194 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.195 ************************************ 00:06:02.195 START TEST dd_bs_lt_native_bs 00:06:02.195 ************************************ 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.195 11:30:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:02.453 { 00:06:02.453 "subsystems": [ 00:06:02.453 { 00:06:02.453 "subsystem": "bdev", 00:06:02.453 "config": [ 00:06:02.453 { 00:06:02.453 "params": { 00:06:02.453 "trtype": "pcie", 00:06:02.453 "traddr": "0000:00:10.0", 00:06:02.453 "name": "Nvme0" 00:06:02.453 }, 00:06:02.453 "method": "bdev_nvme_attach_controller" 00:06:02.453 }, 00:06:02.453 { 00:06:02.453 "method": "bdev_wait_for_examine" 00:06:02.453 } 00:06:02.453 ] 00:06:02.453 } 00:06:02.453 ] 00:06:02.453 } 00:06:02.453 [2024-07-12 11:30:05.664873] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
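run_test dd_bs_lt_native_bs, started just above, deliberately asks spdk_dd for --bs=2048 while the namespace's native block size is 4096: get_native_nvme_bs pulled that value out of the spdk_nvme_identify dump a few entries earlier by first capturing the current LBA format index (#04) and then that format's data size. A rough, hedged reconstruction of the extraction (regexes kept in variables for readability; the values in comments are the ones visible in the trace):

  # Hedged sketch of get_native_nvme_bs (dd/common.sh@124-134 in the trace).
  pci=0000:00:10.0
  mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
  re='Current LBA Format: *LBA Format #([0-9]+)'
  [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}         # lbaf=04
  re="LBA Format #$lbaf: Data Size: *([0-9]+)"
  [[ ${id[*]} =~ $re ]] && native_bs=${BASH_REMATCH[1]}    # native_bs=4096
  echo "$native_bs"

With a 2048-byte request against a 4096-byte native block, spdk_dd is expected to refuse the copy ("--bs value cannot be less than ... native block size" below), and the NOT wrapper turns that non-zero exit into a test pass, which is what the es=234 to es=1 bookkeeping after the error records.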
00:06:02.453 [2024-07-12 11:30:05.665004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62532 ] 00:06:02.453 [2024-07-12 11:30:05.810825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.711 [2024-07-12 11:30:05.942426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.711 [2024-07-12 11:30:06.001342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.711 [2024-07-12 11:30:06.113029] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:02.711 [2024-07-12 11:30:06.113097] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.971 [2024-07-12 11:30:06.244270] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:02.971 ************************************ 00:06:02.971 END TEST dd_bs_lt_native_bs 00:06:02.971 ************************************ 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.971 00:06:02.971 real 0m0.755s 00:06:02.971 user 0m0.544s 00:06:02.971 sys 0m0.179s 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.971 ************************************ 00:06:02.971 START TEST dd_rw 00:06:02.971 ************************************ 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:02.971 11:30:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.905 11:30:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:03.905 11:30:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:03.905 11:30:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.905 11:30:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.905 [2024-07-12 11:30:07.127997] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:03.905 [2024-07-12 11:30:07.128112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62563 ] 00:06:03.905 { 00:06:03.905 "subsystems": [ 00:06:03.905 { 00:06:03.905 "subsystem": "bdev", 00:06:03.905 "config": [ 00:06:03.905 { 00:06:03.905 "params": { 00:06:03.905 "trtype": "pcie", 00:06:03.905 "traddr": "0000:00:10.0", 00:06:03.905 "name": "Nvme0" 00:06:03.905 }, 00:06:03.905 "method": "bdev_nvme_attach_controller" 00:06:03.905 }, 00:06:03.905 { 00:06:03.905 "method": "bdev_wait_for_examine" 00:06:03.905 } 00:06:03.905 ] 00:06:03.905 } 00:06:03.905 ] 00:06:03.905 } 00:06:03.905 [2024-07-12 11:30:07.267774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.165 [2024-07-12 11:30:07.383862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.165 [2024-07-12 11:30:07.436431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.427  Copying: 60/60 [kB] (average 19 MBps) 00:06:04.427 00:06:04.427 11:30:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:04.427 11:30:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:04.427 11:30:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.427 11:30:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.427 [2024-07-12 11:30:07.822728] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
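The dd_rw pass under way here derives everything from the 4096-byte native block size established above: the bss loop shifts it left by 0, 1 and 2, qds is fixed at (1 64), and the first pass uses count=15, so gen_bytes produces 15 * 4096 = 61440 bytes, the 60 kB reported by the copy just above. The same arithmetic as a standalone check (values copied from the trace):

  # Hedged sketch of the sizing used by dd_rw (dd/basic_rw.sh@11-25).
  native_bs=4096
  bss=()
  for bs in {0..2}; do
      bss+=($((native_bs << bs)))     # 4096 8192 16384
  done
  qds=(1 64)
  count=15
  size=$((count * bss[0]))            # 15 * 4096 = 61440 bytes = 60 kB
  echo "${bss[*]} / $size"

Only the 4096- and 8192-byte passes fall inside this excerpt; each of them is exercised at both queue depths before the block size advances.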
00:06:04.427 [2024-07-12 11:30:07.822848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62582 ] 00:06:04.427 { 00:06:04.427 "subsystems": [ 00:06:04.427 { 00:06:04.427 "subsystem": "bdev", 00:06:04.427 "config": [ 00:06:04.427 { 00:06:04.427 "params": { 00:06:04.427 "trtype": "pcie", 00:06:04.427 "traddr": "0000:00:10.0", 00:06:04.427 "name": "Nvme0" 00:06:04.427 }, 00:06:04.427 "method": "bdev_nvme_attach_controller" 00:06:04.427 }, 00:06:04.427 { 00:06:04.427 "method": "bdev_wait_for_examine" 00:06:04.427 } 00:06:04.427 ] 00:06:04.427 } 00:06:04.427 ] 00:06:04.427 } 00:06:04.686 [2024-07-12 11:30:07.961631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.686 [2024-07-12 11:30:08.079679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.686 [2024-07-12 11:30:08.134366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.204  Copying: 60/60 [kB] (average 19 MBps) 00:06:05.204 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.204 11:30:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.204 [2024-07-12 11:30:08.509798] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:05.204 [2024-07-12 11:30:08.510104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62593 ] 00:06:05.204 { 00:06:05.204 "subsystems": [ 00:06:05.204 { 00:06:05.204 "subsystem": "bdev", 00:06:05.204 "config": [ 00:06:05.204 { 00:06:05.204 "params": { 00:06:05.204 "trtype": "pcie", 00:06:05.204 "traddr": "0000:00:10.0", 00:06:05.204 "name": "Nvme0" 00:06:05.204 }, 00:06:05.204 "method": "bdev_nvme_attach_controller" 00:06:05.204 }, 00:06:05.204 { 00:06:05.204 "method": "bdev_wait_for_examine" 00:06:05.204 } 00:06:05.204 ] 00:06:05.204 } 00:06:05.204 ] 00:06:05.204 } 00:06:05.204 [2024-07-12 11:30:08.642528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.462 [2024-07-12 11:30:08.770821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.462 [2024-07-12 11:30:08.829633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.721  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:05.721 00:06:05.721 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:05.721 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:05.721 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:05.721 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:05.721 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:05.721 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:05.721 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.657 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:06.657 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:06.657 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.657 11:30:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.657 [2024-07-12 11:30:09.858894] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
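Each (bs, qd) combination follows the cycle that has just completed for bs=4096, qd=1: write the generated dump file into the Nvme0n1 bdev, read it back into dd.dump1, diff the two files (silent on success), then blank the start of the namespace with a 1 MiB copy from /dev/zero before the next combination. A condensed, hedged sketch of one cycle; the flags and paths are taken from the trace, while gen_conf stands in for the JSON generator whose output appears inline above (in the real run it is delivered on /dev/fd/62):

  # Hedged sketch of a single dd_rw cycle (dd/basic_rw.sh@30-45).
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  bs=4096 qd=1 count=15

  "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)                    # write
  "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)   # read back
  diff -q "$dump0" "$dump1"                                                                         # verify
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)                  # clear_nvme

The per-copy throughput also records the effect of queue depth on this emulated controller: the qd=1 transfers above average 19 MBps, while the qd=64 pass that follows reports 58 MBps for the same 60 kB.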
00:06:06.657 [2024-07-12 11:30:09.859008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62623 ] 00:06:06.657 { 00:06:06.657 "subsystems": [ 00:06:06.657 { 00:06:06.657 "subsystem": "bdev", 00:06:06.657 "config": [ 00:06:06.657 { 00:06:06.657 "params": { 00:06:06.657 "trtype": "pcie", 00:06:06.657 "traddr": "0000:00:10.0", 00:06:06.657 "name": "Nvme0" 00:06:06.657 }, 00:06:06.657 "method": "bdev_nvme_attach_controller" 00:06:06.657 }, 00:06:06.657 { 00:06:06.657 "method": "bdev_wait_for_examine" 00:06:06.657 } 00:06:06.657 ] 00:06:06.657 } 00:06:06.657 ] 00:06:06.657 } 00:06:06.657 [2024-07-12 11:30:10.001774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.916 [2024-07-12 11:30:10.131023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.916 [2024-07-12 11:30:10.185243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.175  Copying: 60/60 [kB] (average 58 MBps) 00:06:07.175 00:06:07.175 11:30:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:07.175 11:30:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:07.175 11:30:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.175 11:30:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.175 [2024-07-12 11:30:10.568690] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:07.175 [2024-07-12 11:30:10.568794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62631 ] 00:06:07.175 { 00:06:07.175 "subsystems": [ 00:06:07.175 { 00:06:07.175 "subsystem": "bdev", 00:06:07.175 "config": [ 00:06:07.175 { 00:06:07.175 "params": { 00:06:07.175 "trtype": "pcie", 00:06:07.175 "traddr": "0000:00:10.0", 00:06:07.175 "name": "Nvme0" 00:06:07.175 }, 00:06:07.175 "method": "bdev_nvme_attach_controller" 00:06:07.175 }, 00:06:07.175 { 00:06:07.175 "method": "bdev_wait_for_examine" 00:06:07.175 } 00:06:07.175 ] 00:06:07.175 } 00:06:07.175 ] 00:06:07.175 } 00:06:07.434 [2024-07-12 11:30:10.700481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.434 [2024-07-12 11:30:10.814229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.434 [2024-07-12 11:30:10.867184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.992  Copying: 60/60 [kB] (average 58 MBps) 00:06:07.992 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.992 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 { 00:06:07.992 "subsystems": [ 00:06:07.992 { 00:06:07.992 "subsystem": "bdev", 00:06:07.992 "config": [ 00:06:07.992 { 00:06:07.992 "params": { 00:06:07.992 "trtype": "pcie", 00:06:07.992 "traddr": "0000:00:10.0", 00:06:07.992 "name": "Nvme0" 00:06:07.992 }, 00:06:07.992 "method": "bdev_nvme_attach_controller" 00:06:07.992 }, 00:06:07.992 { 00:06:07.992 "method": "bdev_wait_for_examine" 00:06:07.992 } 00:06:07.992 ] 00:06:07.992 } 00:06:07.992 ] 00:06:07.992 } 00:06:07.992 [2024-07-12 11:30:11.251190] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:07.992 [2024-07-12 11:30:11.251457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62652 ] 00:06:07.992 [2024-07-12 11:30:11.389058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.250 [2024-07-12 11:30:11.496810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.250 [2024-07-12 11:30:11.549345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.508  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:08.508 00:06:08.508 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:08.508 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:08.508 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:08.508 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:08.508 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:08.508 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:08.508 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:08.508 11:30:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.074 11:30:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:09.074 11:30:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:09.074 11:30:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.074 11:30:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.074 [2024-07-12 11:30:12.515347] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
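From here the suite moves to the second bss entry: the block size doubles to 8192 and count drops to 7, so each transfer is 7 * 8192 = 57344 bytes, the 56 kB shown in the copies that follow. As a quick check (a sketch only):

  echo $((7 * 8192))       # 57344
  echo $((57344 / 1024))   # 56, matching the "Copying: 56/56 [kB]" lines below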
00:06:09.074 [2024-07-12 11:30:12.515724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62671 ] 00:06:09.074 { 00:06:09.074 "subsystems": [ 00:06:09.074 { 00:06:09.074 "subsystem": "bdev", 00:06:09.074 "config": [ 00:06:09.074 { 00:06:09.074 "params": { 00:06:09.074 "trtype": "pcie", 00:06:09.074 "traddr": "0000:00:10.0", 00:06:09.074 "name": "Nvme0" 00:06:09.074 }, 00:06:09.074 "method": "bdev_nvme_attach_controller" 00:06:09.074 }, 00:06:09.074 { 00:06:09.074 "method": "bdev_wait_for_examine" 00:06:09.074 } 00:06:09.074 ] 00:06:09.074 } 00:06:09.074 ] 00:06:09.074 } 00:06:09.332 [2024-07-12 11:30:12.650553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.590 [2024-07-12 11:30:12.806565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.590 [2024-07-12 11:30:12.861126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.846  Copying: 56/56 [kB] (average 54 MBps) 00:06:09.846 00:06:09.846 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:09.846 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:09.846 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.846 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.846 [2024-07-12 11:30:13.249866] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:09.846 [2024-07-12 11:30:13.250152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62690 ] 00:06:09.846 { 00:06:09.846 "subsystems": [ 00:06:09.846 { 00:06:09.846 "subsystem": "bdev", 00:06:09.846 "config": [ 00:06:09.846 { 00:06:09.846 "params": { 00:06:09.846 "trtype": "pcie", 00:06:09.846 "traddr": "0000:00:10.0", 00:06:09.846 "name": "Nvme0" 00:06:09.846 }, 00:06:09.846 "method": "bdev_nvme_attach_controller" 00:06:09.846 }, 00:06:09.846 { 00:06:09.846 "method": "bdev_wait_for_examine" 00:06:09.846 } 00:06:09.846 ] 00:06:09.846 } 00:06:09.846 ] 00:06:09.846 } 00:06:10.104 [2024-07-12 11:30:13.387136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.104 [2024-07-12 11:30:13.491028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.104 [2024-07-12 11:30:13.544971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.620  Copying: 56/56 [kB] (average 27 MBps) 00:06:10.620 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.620 11:30:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.620 [2024-07-12 11:30:13.930364] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:10.620 [2024-07-12 11:30:13.930450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62710 ] 00:06:10.620 { 00:06:10.620 "subsystems": [ 00:06:10.620 { 00:06:10.620 "subsystem": "bdev", 00:06:10.620 "config": [ 00:06:10.620 { 00:06:10.620 "params": { 00:06:10.620 "trtype": "pcie", 00:06:10.620 "traddr": "0000:00:10.0", 00:06:10.620 "name": "Nvme0" 00:06:10.620 }, 00:06:10.620 "method": "bdev_nvme_attach_controller" 00:06:10.620 }, 00:06:10.620 { 00:06:10.620 "method": "bdev_wait_for_examine" 00:06:10.620 } 00:06:10.620 ] 00:06:10.620 } 00:06:10.620 ] 00:06:10.620 } 00:06:10.620 [2024-07-12 11:30:14.064937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.879 [2024-07-12 11:30:14.186187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.879 [2024-07-12 11:30:14.243261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.137  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:11.137 00:06:11.137 11:30:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:11.137 11:30:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:11.137 11:30:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:11.137 11:30:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:11.137 11:30:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:11.137 11:30:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:11.137 11:30:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.070 11:30:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:12.070 11:30:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:12.070 11:30:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.070 11:30:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.070 [2024-07-12 11:30:15.224325] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
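The JSON blocks repeated before every copy are the bdev configuration that gen_conf writes for spdk_dd: attach the PCIe controller at 0000:00:10.0 as Nvme0 and wait for its bdevs to be examined. The test hands it over on /dev/fd/62; a hand-run equivalent would simply point --json at a file holding the same document (a sketch with a hypothetical path, not part of the test):

  # Hedged sketch: replaying one of the traced copies by hand.
  conf=/tmp/nvme0_bdev.json   # hypothetical file containing the JSON shown verbatim above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --ob=Nvme0n1 --bs=8192 --qd=64 --json "$conf"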
00:06:12.070 [2024-07-12 11:30:15.224729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62730 ] 00:06:12.070 { 00:06:12.070 "subsystems": [ 00:06:12.070 { 00:06:12.070 "subsystem": "bdev", 00:06:12.070 "config": [ 00:06:12.070 { 00:06:12.070 "params": { 00:06:12.070 "trtype": "pcie", 00:06:12.070 "traddr": "0000:00:10.0", 00:06:12.070 "name": "Nvme0" 00:06:12.070 }, 00:06:12.070 "method": "bdev_nvme_attach_controller" 00:06:12.070 }, 00:06:12.070 { 00:06:12.070 "method": "bdev_wait_for_examine" 00:06:12.070 } 00:06:12.070 ] 00:06:12.070 } 00:06:12.070 ] 00:06:12.070 } 00:06:12.070 [2024-07-12 11:30:15.360938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.070 [2024-07-12 11:30:15.477080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.328 [2024-07-12 11:30:15.534311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.586  Copying: 56/56 [kB] (average 54 MBps) 00:06:12.586 00:06:12.586 11:30:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:12.586 11:30:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:12.586 11:30:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.586 11:30:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.586 [2024-07-12 11:30:15.931108] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:12.586 [2024-07-12 11:30:15.931747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62744 ] 00:06:12.586 { 00:06:12.586 "subsystems": [ 00:06:12.586 { 00:06:12.586 "subsystem": "bdev", 00:06:12.586 "config": [ 00:06:12.586 { 00:06:12.586 "params": { 00:06:12.586 "trtype": "pcie", 00:06:12.586 "traddr": "0000:00:10.0", 00:06:12.586 "name": "Nvme0" 00:06:12.586 }, 00:06:12.586 "method": "bdev_nvme_attach_controller" 00:06:12.586 }, 00:06:12.586 { 00:06:12.586 "method": "bdev_wait_for_examine" 00:06:12.586 } 00:06:12.586 ] 00:06:12.586 } 00:06:12.586 ] 00:06:12.586 } 00:06:12.845 [2024-07-12 11:30:16.074079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.845 [2024-07-12 11:30:16.211610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.845 [2024-07-12 11:30:16.269892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.362  Copying: 56/56 [kB] (average 54 MBps) 00:06:13.362 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.362 11:30:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.362 { 00:06:13.362 "subsystems": [ 00:06:13.362 { 00:06:13.362 "subsystem": "bdev", 00:06:13.362 "config": [ 00:06:13.362 { 00:06:13.362 "params": { 00:06:13.362 "trtype": "pcie", 00:06:13.362 "traddr": "0000:00:10.0", 00:06:13.362 "name": "Nvme0" 00:06:13.362 }, 00:06:13.362 "method": "bdev_nvme_attach_controller" 00:06:13.362 }, 00:06:13.362 { 00:06:13.362 "method": "bdev_wait_for_examine" 00:06:13.362 } 00:06:13.362 ] 00:06:13.362 } 00:06:13.362 ] 00:06:13.362 } 00:06:13.362 [2024-07-12 11:30:16.670359] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
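The clear_nvme step traced above resets the target bdev between iterations by streaming zeroes over it with spdk_dd. Below is a hedged, minimal reconstruction of that step from the xtrace output alone; the real helper lives in test/dd/common.sh and may derive the block count differently, and the names SPDK_DD, BDEV_JSON and clear_nvme_sketch are illustrative. Process substitution stands in for the test's /dev/fd/62 JSON plumbing.

# Sketch only: approximates the clear_nvme behaviour visible in the trace above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
BDEV_JSON='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
   "method":"bdev_nvme_attach_controller"},
  {"method":"bdev_wait_for_examine"}]}]}'        # same config the test emits via gen_conf
clear_nvme_sketch() {
  local bdev=$1 bs=1048576 count=1               # overwrite 1 MiB with zeroes, as traced
  "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" \
             --json <(printf '%s' "$BDEV_JSON")
}
clear_nvme_sketch Nvme0n1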
00:06:13.362 [2024-07-12 11:30:16.670452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62759 ] 00:06:13.362 [2024-07-12 11:30:16.806537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.619 [2024-07-12 11:30:16.916397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.619 [2024-07-12 11:30:16.971500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.876  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:13.876 00:06:13.876 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:13.876 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:13.876 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:13.876 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:13.876 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:13.876 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:13.876 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:13.876 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.472 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:14.472 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:14.472 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.472 11:30:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.472 { 00:06:14.472 "subsystems": [ 00:06:14.472 { 00:06:14.472 "subsystem": "bdev", 00:06:14.472 "config": [ 00:06:14.472 { 00:06:14.472 "params": { 00:06:14.472 "trtype": "pcie", 00:06:14.472 "traddr": "0000:00:10.0", 00:06:14.472 "name": "Nvme0" 00:06:14.472 }, 00:06:14.472 "method": "bdev_nvme_attach_controller" 00:06:14.472 }, 00:06:14.472 { 00:06:14.472 "method": "bdev_wait_for_examine" 00:06:14.472 } 00:06:14.472 ] 00:06:14.472 } 00:06:14.472 ] 00:06:14.472 } 00:06:14.472 [2024-07-12 11:30:17.861279] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:14.472 [2024-07-12 11:30:17.861558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62784 ] 00:06:14.731 [2024-07-12 11:30:18.001930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.731 [2024-07-12 11:30:18.118402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.731 [2024-07-12 11:30:18.174109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.247  Copying: 48/48 [kB] (average 46 MBps) 00:06:15.247 00:06:15.247 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:15.247 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:15.247 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.247 11:30:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.247 [2024-07-12 11:30:18.564229] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:15.247 [2024-07-12 11:30:18.564313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62798 ] 00:06:15.247 { 00:06:15.247 "subsystems": [ 00:06:15.247 { 00:06:15.247 "subsystem": "bdev", 00:06:15.247 "config": [ 00:06:15.247 { 00:06:15.247 "params": { 00:06:15.247 "trtype": "pcie", 00:06:15.247 "traddr": "0000:00:10.0", 00:06:15.247 "name": "Nvme0" 00:06:15.247 }, 00:06:15.247 "method": "bdev_nvme_attach_controller" 00:06:15.247 }, 00:06:15.247 { 00:06:15.247 "method": "bdev_wait_for_examine" 00:06:15.247 } 00:06:15.247 ] 00:06:15.247 } 00:06:15.247 ] 00:06:15.247 } 00:06:15.506 [2024-07-12 11:30:18.700135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.506 [2024-07-12 11:30:18.807342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.506 [2024-07-12 11:30:18.862871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.764  Copying: 48/48 [kB] (average 46 MBps) 00:06:15.764 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.764 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.022 [2024-07-12 11:30:19.240670] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:16.022 [2024-07-12 11:30:19.241052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62819 ] 00:06:16.022 { 00:06:16.022 "subsystems": [ 00:06:16.022 { 00:06:16.022 "subsystem": "bdev", 00:06:16.022 "config": [ 00:06:16.022 { 00:06:16.022 "params": { 00:06:16.022 "trtype": "pcie", 00:06:16.022 "traddr": "0000:00:10.0", 00:06:16.022 "name": "Nvme0" 00:06:16.022 }, 00:06:16.022 "method": "bdev_nvme_attach_controller" 00:06:16.022 }, 00:06:16.022 { 00:06:16.022 "method": "bdev_wait_for_examine" 00:06:16.022 } 00:06:16.022 ] 00:06:16.022 } 00:06:16.022 ] 00:06:16.022 } 00:06:16.022 [2024-07-12 11:30:19.374216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.280 [2024-07-12 11:30:19.492700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.280 [2024-07-12 11:30:19.547577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.538  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:16.538 00:06:16.538 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:16.538 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:16.538 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:16.538 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:16.538 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:16.538 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:16.538 11:30:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.105 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:17.105 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:17.105 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.105 11:30:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.105 [2024-07-12 11:30:20.407407] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:17.105 [2024-07-12 11:30:20.407516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62838 ] 00:06:17.105 { 00:06:17.105 "subsystems": [ 00:06:17.105 { 00:06:17.105 "subsystem": "bdev", 00:06:17.105 "config": [ 00:06:17.105 { 00:06:17.105 "params": { 00:06:17.105 "trtype": "pcie", 00:06:17.105 "traddr": "0000:00:10.0", 00:06:17.105 "name": "Nvme0" 00:06:17.105 }, 00:06:17.105 "method": "bdev_nvme_attach_controller" 00:06:17.105 }, 00:06:17.105 { 00:06:17.105 "method": "bdev_wait_for_examine" 00:06:17.105 } 00:06:17.105 ] 00:06:17.105 } 00:06:17.105 ] 00:06:17.105 } 00:06:17.105 [2024-07-12 11:30:20.545833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.362 [2024-07-12 11:30:20.645784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.362 [2024-07-12 11:30:20.699888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.620  Copying: 48/48 [kB] (average 46 MBps) 00:06:17.620 00:06:17.620 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:17.620 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:17.620 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.620 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.878 { 00:06:17.878 "subsystems": [ 00:06:17.878 { 00:06:17.878 "subsystem": "bdev", 00:06:17.878 "config": [ 00:06:17.878 { 00:06:17.878 "params": { 00:06:17.878 "trtype": "pcie", 00:06:17.878 "traddr": "0000:00:10.0", 00:06:17.878 "name": "Nvme0" 00:06:17.878 }, 00:06:17.878 "method": "bdev_nvme_attach_controller" 00:06:17.878 }, 00:06:17.878 { 00:06:17.878 "method": "bdev_wait_for_examine" 00:06:17.878 } 00:06:17.878 ] 00:06:17.878 } 00:06:17.878 ] 00:06:17.878 } 00:06:17.878 [2024-07-12 11:30:21.085702] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:17.878 [2024-07-12 11:30:21.085809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62852 ] 00:06:17.878 [2024-07-12 11:30:21.222781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.136 [2024-07-12 11:30:21.340482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.136 [2024-07-12 11:30:21.397925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.394  Copying: 48/48 [kB] (average 46 MBps) 00:06:18.394 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.394 11:30:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.394 [2024-07-12 11:30:21.780966] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
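Each bs/qd iteration of the dd_rw loop traced above is the same three-step round trip: write the generated dump file to the NVMe bdev, read the same number of blocks back into a second dump file, and diff the two. A condensed sketch of the iteration that just completed (bs=16384, qd=64, count=3) follows; SPDK_DD and BDEV_JSON are illustrative names, and process substitution replaces the test's /dev/fd/62 JSON plumbing.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0    # holds the pre-generated bytes for this size
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
BDEV_JSON='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
# 1) write the dump file to the bdev at the requested block size and queue depth
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=16384 --qd=64 --json <(printf '%s' "$BDEV_JSON")
# 2) read the same number of blocks back into the second dump file
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=16384 --qd=64 --count=3 --json <(printf '%s' "$BDEV_JSON")
# 3) a byte-for-byte comparison closes the round trip
diff -q "$DUMP0" "$DUMP1"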
00:06:18.394 [2024-07-12 11:30:21.781049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62867 ] 00:06:18.394 { 00:06:18.394 "subsystems": [ 00:06:18.394 { 00:06:18.394 "subsystem": "bdev", 00:06:18.394 "config": [ 00:06:18.394 { 00:06:18.394 "params": { 00:06:18.394 "trtype": "pcie", 00:06:18.394 "traddr": "0000:00:10.0", 00:06:18.394 "name": "Nvme0" 00:06:18.394 }, 00:06:18.394 "method": "bdev_nvme_attach_controller" 00:06:18.394 }, 00:06:18.394 { 00:06:18.394 "method": "bdev_wait_for_examine" 00:06:18.394 } 00:06:18.394 ] 00:06:18.394 } 00:06:18.394 ] 00:06:18.394 } 00:06:18.653 [2024-07-12 11:30:21.917789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.653 [2024-07-12 11:30:22.030195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.653 [2024-07-12 11:30:22.084819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.170  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:19.170 00:06:19.170 ************************************ 00:06:19.170 END TEST dd_rw 00:06:19.170 ************************************ 00:06:19.170 00:06:19.170 real 0m16.019s 00:06:19.170 user 0m11.982s 00:06:19.170 sys 0m5.484s 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.170 ************************************ 00:06:19.170 START TEST dd_rw_offset 00:06:19.170 ************************************ 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.170 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:19.171 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=jqpjzmmd0due8s2pnl1wxy0ggni548d03prp7zqadvbi38ae3sawkdqqvp4s7bwgs57djiodnk0ste8hqopa5no657qebpcb44uhnw5407nk28ol4bvg0vszlnsowiv6bvn8w79olqey6gbvvv7u0di76k4t6qn46a0wwss0z6kgokedt7ju9n5l6y8tkigrjxwfwwus540dynnee5bldbzm2jszv8u9y8k3q6reryyh6tr1p9ef0bmtlf5rgpofvz22g78z7qbkjdhm53de5zi65x9t4o1chyk08eln8rvipm6s3bbsy038pydgp7e7hxgg8awd6pgvcco0asanfolw7q1wmqfzf9erkjvvt4v3s3clj4u6gdza4iux5trh4m1yv37r232ys7i6gxkn0xo768205jy8ghxlbkmtvw3lqmqh343rs2rvk7l4aai1z5qto5izbkoo5bzv080d8ezc84bc9x0rw2rqx2vg3b9n9i4nknfid595v5q92s2wjf4hjtjdazecj8w8yflpo79g6dx82p6vfgvb3vbi2s44twnvvadf5hw9fyj73phg7jh7syha36hd50fx0ik67olus0ezpx5efkqmwz1q7ems3tlw9pgch58akl6sal0lnvzxka8xdwxyvmc6yfv57lpwsgk78tu2i7tmjg5jskgazi2icwlknw5zp5ntyjevhlihkl65h25faj7gs9p9sw3z3ty14jifxfv2xpg3ehwgx66qyhl3smux8w287wm3ehfxzbyt0wyxzejiwf4k3zvnm16cetnxwdkjtsw9zaaetm2a5rmf5r1k17kwdcithg3mf7s3qsios4v6m4oq8h6wbidsc71gk1nqu9bpogjdjvxeygy8cdx152ofsx386boiw4k5saw5bxc80cuvoern9rv83kqq2amzgg4xqysy0ctnp99dhwbqkzazht2mszt1d8m4ugtov75as9uxo6if637bcumasp2o3luinetb03ir4pq5fihi174uc305j9qejtjvrxasjd2a9t7d8p9l1nil615vi1x7b5qpmms5mmvweczh4pj69ej8rtdc9p307r45bbsw34u8m70epwfuzk69qmg40yzfuaye1v88r8yobjn66685zjg8k61rju2l0xix421t198wwy62jg6q2m5y6duphlijnrjoi0zk1mki2z3md8a9ot96en1h5jpdvz6wp9n0hy6tdjm7f871eclrat9h5j4sdyinpxd3e44v6v6axutm1xxdwhztazac2h7ixko7xuhltayd489e8qoawx512gloi52914z6db3by8zwgvaf7ee7drfdtf27swhdmbkybjoqklmfwf2kz7s4i2r7n42im1exgkfw6vzztz6sb6ow9edyj787m392ymjxlwqak3fmauj21i9gfh9gksteu69hjccfsrlk859z7zvmdxq7t4yoi2m7w9rgxovs0s797p6p1isqogo15vyp3qgn9p25z945ahilgcsddu3yuvynzue4ogy0va3v0lol8vqyhp19sqricqgchlt3wa8umkrsw9p648stozdhcbu6m1wpw59w2tubhtuwqj6d9mal2jbo6954i5p9e5sce4966e2erravzazh85cnj2hkav4wmtfiq3yb5iufae1eygzonnvspojnghvlgvtawi5gejdkkckwymnlyzdlu7xcih56ewifl1grefhnm1n7gv4ffq31ocs4m20i5s2y1p5b1dzg34q5r1mq2w4axrut6btrx90jdy5k8avnjkbegfisheuxf8cnh9e80aftw4qr0tklyevz1v4qlmiq2to0k0c4abvqx04uqwa4zx5gdapny2b8y7uqifgx538ybc4z2vaq2ve81xdj8i07s7z2vz8fsa605ijgepbwa5ogp3o51jd64pdplclcqrcgui9qz57c7nfreqpu2og7m7vze1bfhbw5up5v3nspw83gqbagb0gr7mw8kn04sn86hw2uq67139gu3kv4amk4ijptgul1k007253k5r7hiy4n9u4i8w32bjph2o5s26dsx285y7kabnbhwk3iua0q2o1uoghstxt5cb8rc62m3xp5zlakfzrof2spxfu365dd9o29zwz7t6ftpkjg9yzbnitxi0osvx2pv53h4nx02icob3rx1u1y3okz5p2ia97vxrw71hwadgyboa6k2fwcqnwdjqi9woq3bjvcvcl3wzv3xp5nhw14a827p9yyc3tdtuboaj9lj5gx126xv4gmhxwerkelddnlhh4ubbjyg8jll4tvijv39qruprbbjldms9imicyyeukpaf195378pwh2yvgou6lsl9714gu3j14tqvoc2kbn647c6ldvohby3z0ui4vtl7oulvsksv9ssaqmt6g9tyo7ra2224q6tkg3sqlukr8p0bsc6xpofuaqepfu0yyo5blq38894fifkg77if5no664qwq3vvdzl46e48fqtc1dorzhgfdabr0rv6aryjpb6kr8ytp3skmmfihjj0g45ypt53476yj1hrhhcu03tethjkyreqqilu3d47eyrcfvdqhvalqkdajv0svevm4orkee08gqigtb3h9xzoq95apkuxarzz6e1usg6bd3qjpabq6148xnn2pt8ijmls3f48hakjjlmay71a9ztrwzw4z59j3c5rnv14rd01qlpjm83xjdnntrb8afsbf3em0kan9lt45wdp2sretqcj4s4ub6z36oqoylx9cosq5u6b1iua0rpjddqpdyz4n6ue2bp8s5kw6jhyfektgy7jhh39o5ps9j0okpvmpqmplf8lgzykxav63adta2ir235kypnwymw9vc1r800kc0if6hak3qv2bkvxf0uehxe6t2rv44cjal2y4r40ijrtfomeegmxrxr45chc9rsy7azry1ex13d6igtfnw776psnr3p6ua24ycm0p8ay7dv8wy86fkz9jai87drp8x7vtugytn52awm1e34r4ct67hi6vukodu3dsu7p4ogjp3c345qdnkaf0mogr18squ01ffj9cjfiuvck9v63dic0bp3p5lnmcvouh69q2y79vzkneqws2lhhfk2llftc9zuzycwhf9rinng1o68hhkmbubkx4vsxho98p3ypism1uvhc3e3ex8gxxjt4notccctt7671ueqvd6su7kh5rtcv20ycnrvc1kp4hap5sdcrcopvmbbpipbe0ggmeug02nxma042t36s6d8h0ztrdfladq9qdkbx207ndtfeg6d3zhiqwhwbcrt2w2rm4x4hq8fkn7uyvcxi397qbgfrqiolfkqv814ucp8mbj7c03xpk5tbrive8gh9z1pt3ug3q5czn3klrurbwf53sbvs5973rm1r0rc6r566n6oikyzwus1hw1rb6ojcax98ktd3ugpgxygg0dxo0pn5r75zuf349vr32zmtblcsc78h40tdgdwf1kyi3s0shnq76ee4e39hj2jom6xhcvwddmoy9l8ns6ju
pw9ipcgr4226b38tm6dq0g4dg3e8vxnwsj0so4t3f26o4rxeyuoxhm0j6nmty7rmrkqx3quro5fj58474jgdcubaeu0a6rt2mjb6qtuy7tdyyc8iw65anjjf8skx8iwe7t3tg96hnwtiov0v91n1ld9zaufu7lmhul4kx1s6qxha9io5ped99yq54s773z8bd400bky80v0cp5no09pkh3lm7rfghg8e7t2n2gx447arcklal7orphmv4xfgf5z2trlop4xdg3lag05yc7j68lp9o7m0ptgj1lstliu1fc8fibp5ipwk22yl3pq98g57quf0nttwckt2yrdzo9s3o82j53y5gm3usfkwqyanezb3wewtja5qy7cj85r73gvl30axgpeofy2ovdnkrhn71soi21dblqvsgfq4lg22l8se4kg0uanrd6w1b3f6f4vcrh14qk1bhxui4r71ro3a5hb2se20ykcyv8wglwoaekdd9255q3cxbqr36bsb0jayy6ey2jon3txlie8gr8ipdy7lcdm1y3fmx3 00:06:19.171 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:19.171 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:19.171 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:19.171 11:30:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.171 [2024-07-12 11:30:22.581757] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:19.171 [2024-07-12 11:30:22.581857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62903 ] 00:06:19.171 { 00:06:19.171 "subsystems": [ 00:06:19.171 { 00:06:19.171 "subsystem": "bdev", 00:06:19.171 "config": [ 00:06:19.171 { 00:06:19.171 "params": { 00:06:19.171 "trtype": "pcie", 00:06:19.171 "traddr": "0000:00:10.0", 00:06:19.171 "name": "Nvme0" 00:06:19.171 }, 00:06:19.171 "method": "bdev_nvme_attach_controller" 00:06:19.171 }, 00:06:19.171 { 00:06:19.171 "method": "bdev_wait_for_examine" 00:06:19.171 } 00:06:19.171 ] 00:06:19.171 } 00:06:19.171 ] 00:06:19.171 } 00:06:19.431 [2024-07-12 11:30:22.718219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.431 [2024-07-12 11:30:22.842093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.691 [2024-07-12 11:30:22.899638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.951  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:19.951 00:06:19.951 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:19.951 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:19.951 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:19.951 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.951 { 00:06:19.951 "subsystems": [ 00:06:19.951 { 00:06:19.951 "subsystem": "bdev", 00:06:19.951 "config": [ 00:06:19.951 { 00:06:19.951 "params": { 00:06:19.951 "trtype": "pcie", 00:06:19.951 "traddr": "0000:00:10.0", 00:06:19.951 "name": "Nvme0" 00:06:19.951 }, 00:06:19.951 "method": "bdev_nvme_attach_controller" 00:06:19.951 }, 00:06:19.951 { 00:06:19.951 "method": "bdev_wait_for_examine" 00:06:19.951 } 00:06:19.951 ] 00:06:19.951 } 00:06:19.951 ] 00:06:19.951 } 00:06:19.951 [2024-07-12 11:30:23.296286] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
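The dd_rw_offset case above checks offset I/O rather than throughput: it writes one 4096-byte block of generated data one I/O unit into the bdev (--seek=1), reads it back from the same offset (--skip=1 --count=1), and compares the bytes. A hedged sketch of that round trip follows; how the generated string reaches dd.dump0 is not visible in this trace, and SPDK_DD/BDEV_JSON are the same illustrative stand-ins as in the earlier sketches.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
BDEV_JSON='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
# write the 4 kB block at an offset of one I/O unit, then read it back from that offset
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$BDEV_JSON")
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(printf '%s' "$BDEV_JSON")
# compare what came back with what went in (the trace shows a read -rn4096 into data_check)
read -rn4096 data_check < "$DUMP1"
[[ $data_check == "$(head -c 4096 "$DUMP0")" ]]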
00:06:19.951 [2024-07-12 11:30:23.296405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62917 ] 00:06:20.209 [2024-07-12 11:30:23.437596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.209 [2024-07-12 11:30:23.556600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.209 [2024-07-12 11:30:23.611563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.724  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:20.724 00:06:20.724 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ jqpjzmmd0due8s2pnl1wxy0ggni548d03prp7zqadvbi38ae3sawkdqqvp4s7bwgs57djiodnk0ste8hqopa5no657qebpcb44uhnw5407nk28ol4bvg0vszlnsowiv6bvn8w79olqey6gbvvv7u0di76k4t6qn46a0wwss0z6kgokedt7ju9n5l6y8tkigrjxwfwwus540dynnee5bldbzm2jszv8u9y8k3q6reryyh6tr1p9ef0bmtlf5rgpofvz22g78z7qbkjdhm53de5zi65x9t4o1chyk08eln8rvipm6s3bbsy038pydgp7e7hxgg8awd6pgvcco0asanfolw7q1wmqfzf9erkjvvt4v3s3clj4u6gdza4iux5trh4m1yv37r232ys7i6gxkn0xo768205jy8ghxlbkmtvw3lqmqh343rs2rvk7l4aai1z5qto5izbkoo5bzv080d8ezc84bc9x0rw2rqx2vg3b9n9i4nknfid595v5q92s2wjf4hjtjdazecj8w8yflpo79g6dx82p6vfgvb3vbi2s44twnvvadf5hw9fyj73phg7jh7syha36hd50fx0ik67olus0ezpx5efkqmwz1q7ems3tlw9pgch58akl6sal0lnvzxka8xdwxyvmc6yfv57lpwsgk78tu2i7tmjg5jskgazi2icwlknw5zp5ntyjevhlihkl65h25faj7gs9p9sw3z3ty14jifxfv2xpg3ehwgx66qyhl3smux8w287wm3ehfxzbyt0wyxzejiwf4k3zvnm16cetnxwdkjtsw9zaaetm2a5rmf5r1k17kwdcithg3mf7s3qsios4v6m4oq8h6wbidsc71gk1nqu9bpogjdjvxeygy8cdx152ofsx386boiw4k5saw5bxc80cuvoern9rv83kqq2amzgg4xqysy0ctnp99dhwbqkzazht2mszt1d8m4ugtov75as9uxo6if637bcumasp2o3luinetb03ir4pq5fihi174uc305j9qejtjvrxasjd2a9t7d8p9l1nil615vi1x7b5qpmms5mmvweczh4pj69ej8rtdc9p307r45bbsw34u8m70epwfuzk69qmg40yzfuaye1v88r8yobjn66685zjg8k61rju2l0xix421t198wwy62jg6q2m5y6duphlijnrjoi0zk1mki2z3md8a9ot96en1h5jpdvz6wp9n0hy6tdjm7f871eclrat9h5j4sdyinpxd3e44v6v6axutm1xxdwhztazac2h7ixko7xuhltayd489e8qoawx512gloi52914z6db3by8zwgvaf7ee7drfdtf27swhdmbkybjoqklmfwf2kz7s4i2r7n42im1exgkfw6vzztz6sb6ow9edyj787m392ymjxlwqak3fmauj21i9gfh9gksteu69hjccfsrlk859z7zvmdxq7t4yoi2m7w9rgxovs0s797p6p1isqogo15vyp3qgn9p25z945ahilgcsddu3yuvynzue4ogy0va3v0lol8vqyhp19sqricqgchlt3wa8umkrsw9p648stozdhcbu6m1wpw59w2tubhtuwqj6d9mal2jbo6954i5p9e5sce4966e2erravzazh85cnj2hkav4wmtfiq3yb5iufae1eygzonnvspojnghvlgvtawi5gejdkkckwymnlyzdlu7xcih56ewifl1grefhnm1n7gv4ffq31ocs4m20i5s2y1p5b1dzg34q5r1mq2w4axrut6btrx90jdy5k8avnjkbegfisheuxf8cnh9e80aftw4qr0tklyevz1v4qlmiq2to0k0c4abvqx04uqwa4zx5gdapny2b8y7uqifgx538ybc4z2vaq2ve81xdj8i07s7z2vz8fsa605ijgepbwa5ogp3o51jd64pdplclcqrcgui9qz57c7nfreqpu2og7m7vze1bfhbw5up5v3nspw83gqbagb0gr7mw8kn04sn86hw2uq67139gu3kv4amk4ijptgul1k007253k5r7hiy4n9u4i8w32bjph2o5s26dsx285y7kabnbhwk3iua0q2o1uoghstxt5cb8rc62m3xp5zlakfzrof2spxfu365dd9o29zwz7t6ftpkjg9yzbnitxi0osvx2pv53h4nx02icob3rx1u1y3okz5p2ia97vxrw71hwadgyboa6k2fwcqnwdjqi9woq3bjvcvcl3wzv3xp5nhw14a827p9yyc3tdtuboaj9lj5gx126xv4gmhxwerkelddnlhh4ubbjyg8jll4tvijv39qruprbbjldms9imicyyeukpaf195378pwh2yvgou6lsl9714gu3j14tqvoc2kbn647c6ldvohby3z0ui4vtl7oulvsksv9ssaqmt6g9tyo7ra2224q6tkg3sqlukr8p0bsc6xpofuaqepfu0yyo5blq38894fifkg77if5no664qwq3vvdzl46e48fqtc1dorzhgfdabr0rv6aryjpb6kr8ytp3skmmfihjj0g45ypt53476yj1hrhhcu03tethjkyreqqilu3d47eyrcfvdqhvalqkdajv0svevm4orkee08gqigtb3h9xzoq9
5apkuxarzz6e1usg6bd3qjpabq6148xnn2pt8ijmls3f48hakjjlmay71a9ztrwzw4z59j3c5rnv14rd01qlpjm83xjdnntrb8afsbf3em0kan9lt45wdp2sretqcj4s4ub6z36oqoylx9cosq5u6b1iua0rpjddqpdyz4n6ue2bp8s5kw6jhyfektgy7jhh39o5ps9j0okpvmpqmplf8lgzykxav63adta2ir235kypnwymw9vc1r800kc0if6hak3qv2bkvxf0uehxe6t2rv44cjal2y4r40ijrtfomeegmxrxr45chc9rsy7azry1ex13d6igtfnw776psnr3p6ua24ycm0p8ay7dv8wy86fkz9jai87drp8x7vtugytn52awm1e34r4ct67hi6vukodu3dsu7p4ogjp3c345qdnkaf0mogr18squ01ffj9cjfiuvck9v63dic0bp3p5lnmcvouh69q2y79vzkneqws2lhhfk2llftc9zuzycwhf9rinng1o68hhkmbubkx4vsxho98p3ypism1uvhc3e3ex8gxxjt4notccctt7671ueqvd6su7kh5rtcv20ycnrvc1kp4hap5sdcrcopvmbbpipbe0ggmeug02nxma042t36s6d8h0ztrdfladq9qdkbx207ndtfeg6d3zhiqwhwbcrt2w2rm4x4hq8fkn7uyvcxi397qbgfrqiolfkqv814ucp8mbj7c03xpk5tbrive8gh9z1pt3ug3q5czn3klrurbwf53sbvs5973rm1r0rc6r566n6oikyzwus1hw1rb6ojcax98ktd3ugpgxygg0dxo0pn5r75zuf349vr32zmtblcsc78h40tdgdwf1kyi3s0shnq76ee4e39hj2jom6xhcvwddmoy9l8ns6jupw9ipcgr4226b38tm6dq0g4dg3e8vxnwsj0so4t3f26o4rxeyuoxhm0j6nmty7rmrkqx3quro5fj58474jgdcubaeu0a6rt2mjb6qtuy7tdyyc8iw65anjjf8skx8iwe7t3tg96hnwtiov0v91n1ld9zaufu7lmhul4kx1s6qxha9io5ped99yq54s773z8bd400bky80v0cp5no09pkh3lm7rfghg8e7t2n2gx447arcklal7orphmv4xfgf5z2trlop4xdg3lag05yc7j68lp9o7m0ptgj1lstliu1fc8fibp5ipwk22yl3pq98g57quf0nttwckt2yrdzo9s3o82j53y5gm3usfkwqyanezb3wewtja5qy7cj85r73gvl30axgpeofy2ovdnkrhn71soi21dblqvsgfq4lg22l8se4kg0uanrd6w1b3f6f4vcrh14qk1bhxui4r71ro3a5hb2se20ykcyv8wglwoaekdd9255q3cxbqr36bsb0jayy6ey2jon3txlie8gr8ipdy7lcdm1y3fmx3 == \j\q\p\j\z\m\m\d\0\d\u\e\8\s\2\p\n\l\1\w\x\y\0\g\g\n\i\5\4\8\d\0\3\p\r\p\7\z\q\a\d\v\b\i\3\8\a\e\3\s\a\w\k\d\q\q\v\p\4\s\7\b\w\g\s\5\7\d\j\i\o\d\n\k\0\s\t\e\8\h\q\o\p\a\5\n\o\6\5\7\q\e\b\p\c\b\4\4\u\h\n\w\5\4\0\7\n\k\2\8\o\l\4\b\v\g\0\v\s\z\l\n\s\o\w\i\v\6\b\v\n\8\w\7\9\o\l\q\e\y\6\g\b\v\v\v\7\u\0\d\i\7\6\k\4\t\6\q\n\4\6\a\0\w\w\s\s\0\z\6\k\g\o\k\e\d\t\7\j\u\9\n\5\l\6\y\8\t\k\i\g\r\j\x\w\f\w\w\u\s\5\4\0\d\y\n\n\e\e\5\b\l\d\b\z\m\2\j\s\z\v\8\u\9\y\8\k\3\q\6\r\e\r\y\y\h\6\t\r\1\p\9\e\f\0\b\m\t\l\f\5\r\g\p\o\f\v\z\2\2\g\7\8\z\7\q\b\k\j\d\h\m\5\3\d\e\5\z\i\6\5\x\9\t\4\o\1\c\h\y\k\0\8\e\l\n\8\r\v\i\p\m\6\s\3\b\b\s\y\0\3\8\p\y\d\g\p\7\e\7\h\x\g\g\8\a\w\d\6\p\g\v\c\c\o\0\a\s\a\n\f\o\l\w\7\q\1\w\m\q\f\z\f\9\e\r\k\j\v\v\t\4\v\3\s\3\c\l\j\4\u\6\g\d\z\a\4\i\u\x\5\t\r\h\4\m\1\y\v\3\7\r\2\3\2\y\s\7\i\6\g\x\k\n\0\x\o\7\6\8\2\0\5\j\y\8\g\h\x\l\b\k\m\t\v\w\3\l\q\m\q\h\3\4\3\r\s\2\r\v\k\7\l\4\a\a\i\1\z\5\q\t\o\5\i\z\b\k\o\o\5\b\z\v\0\8\0\d\8\e\z\c\8\4\b\c\9\x\0\r\w\2\r\q\x\2\v\g\3\b\9\n\9\i\4\n\k\n\f\i\d\5\9\5\v\5\q\9\2\s\2\w\j\f\4\h\j\t\j\d\a\z\e\c\j\8\w\8\y\f\l\p\o\7\9\g\6\d\x\8\2\p\6\v\f\g\v\b\3\v\b\i\2\s\4\4\t\w\n\v\v\a\d\f\5\h\w\9\f\y\j\7\3\p\h\g\7\j\h\7\s\y\h\a\3\6\h\d\5\0\f\x\0\i\k\6\7\o\l\u\s\0\e\z\p\x\5\e\f\k\q\m\w\z\1\q\7\e\m\s\3\t\l\w\9\p\g\c\h\5\8\a\k\l\6\s\a\l\0\l\n\v\z\x\k\a\8\x\d\w\x\y\v\m\c\6\y\f\v\5\7\l\p\w\s\g\k\7\8\t\u\2\i\7\t\m\j\g\5\j\s\k\g\a\z\i\2\i\c\w\l\k\n\w\5\z\p\5\n\t\y\j\e\v\h\l\i\h\k\l\6\5\h\2\5\f\a\j\7\g\s\9\p\9\s\w\3\z\3\t\y\1\4\j\i\f\x\f\v\2\x\p\g\3\e\h\w\g\x\6\6\q\y\h\l\3\s\m\u\x\8\w\2\8\7\w\m\3\e\h\f\x\z\b\y\t\0\w\y\x\z\e\j\i\w\f\4\k\3\z\v\n\m\1\6\c\e\t\n\x\w\d\k\j\t\s\w\9\z\a\a\e\t\m\2\a\5\r\m\f\5\r\1\k\1\7\k\w\d\c\i\t\h\g\3\m\f\7\s\3\q\s\i\o\s\4\v\6\m\4\o\q\8\h\6\w\b\i\d\s\c\7\1\g\k\1\n\q\u\9\b\p\o\g\j\d\j\v\x\e\y\g\y\8\c\d\x\1\5\2\o\f\s\x\3\8\6\b\o\i\w\4\k\5\s\a\w\5\b\x\c\8\0\c\u\v\o\e\r\n\9\r\v\8\3\k\q\q\2\a\m\z\g\g\4\x\q\y\s\y\0\c\t\n\p\9\9\d\h\w\b\q\k\z\a\z\h\t\2\m\s\z\t\1\d\8\m\4\u\g\t\o\v\7\5\a\s\9\u\x\o\6\i\f\6\3\7\b\c\u\m\a\s\p\2\o\3\l\u\i\n\e\t\b\0\3\i\r\4\p\q\5\f\i\h\i\1\7\4\u\c\3\0\5\j\9\q\e\j\t\j\v\r\x\a\s\j\d\2\a\9\t\7\d\8\
p\9\l\1\n\i\l\6\1\5\v\i\1\x\7\b\5\q\p\m\m\s\5\m\m\v\w\e\c\z\h\4\p\j\6\9\e\j\8\r\t\d\c\9\p\3\0\7\r\4\5\b\b\s\w\3\4\u\8\m\7\0\e\p\w\f\u\z\k\6\9\q\m\g\4\0\y\z\f\u\a\y\e\1\v\8\8\r\8\y\o\b\j\n\6\6\6\8\5\z\j\g\8\k\6\1\r\j\u\2\l\0\x\i\x\4\2\1\t\1\9\8\w\w\y\6\2\j\g\6\q\2\m\5\y\6\d\u\p\h\l\i\j\n\r\j\o\i\0\z\k\1\m\k\i\2\z\3\m\d\8\a\9\o\t\9\6\e\n\1\h\5\j\p\d\v\z\6\w\p\9\n\0\h\y\6\t\d\j\m\7\f\8\7\1\e\c\l\r\a\t\9\h\5\j\4\s\d\y\i\n\p\x\d\3\e\4\4\v\6\v\6\a\x\u\t\m\1\x\x\d\w\h\z\t\a\z\a\c\2\h\7\i\x\k\o\7\x\u\h\l\t\a\y\d\4\8\9\e\8\q\o\a\w\x\5\1\2\g\l\o\i\5\2\9\1\4\z\6\d\b\3\b\y\8\z\w\g\v\a\f\7\e\e\7\d\r\f\d\t\f\2\7\s\w\h\d\m\b\k\y\b\j\o\q\k\l\m\f\w\f\2\k\z\7\s\4\i\2\r\7\n\4\2\i\m\1\e\x\g\k\f\w\6\v\z\z\t\z\6\s\b\6\o\w\9\e\d\y\j\7\8\7\m\3\9\2\y\m\j\x\l\w\q\a\k\3\f\m\a\u\j\2\1\i\9\g\f\h\9\g\k\s\t\e\u\6\9\h\j\c\c\f\s\r\l\k\8\5\9\z\7\z\v\m\d\x\q\7\t\4\y\o\i\2\m\7\w\9\r\g\x\o\v\s\0\s\7\9\7\p\6\p\1\i\s\q\o\g\o\1\5\v\y\p\3\q\g\n\9\p\2\5\z\9\4\5\a\h\i\l\g\c\s\d\d\u\3\y\u\v\y\n\z\u\e\4\o\g\y\0\v\a\3\v\0\l\o\l\8\v\q\y\h\p\1\9\s\q\r\i\c\q\g\c\h\l\t\3\w\a\8\u\m\k\r\s\w\9\p\6\4\8\s\t\o\z\d\h\c\b\u\6\m\1\w\p\w\5\9\w\2\t\u\b\h\t\u\w\q\j\6\d\9\m\a\l\2\j\b\o\6\9\5\4\i\5\p\9\e\5\s\c\e\4\9\6\6\e\2\e\r\r\a\v\z\a\z\h\8\5\c\n\j\2\h\k\a\v\4\w\m\t\f\i\q\3\y\b\5\i\u\f\a\e\1\e\y\g\z\o\n\n\v\s\p\o\j\n\g\h\v\l\g\v\t\a\w\i\5\g\e\j\d\k\k\c\k\w\y\m\n\l\y\z\d\l\u\7\x\c\i\h\5\6\e\w\i\f\l\1\g\r\e\f\h\n\m\1\n\7\g\v\4\f\f\q\3\1\o\c\s\4\m\2\0\i\5\s\2\y\1\p\5\b\1\d\z\g\3\4\q\5\r\1\m\q\2\w\4\a\x\r\u\t\6\b\t\r\x\9\0\j\d\y\5\k\8\a\v\n\j\k\b\e\g\f\i\s\h\e\u\x\f\8\c\n\h\9\e\8\0\a\f\t\w\4\q\r\0\t\k\l\y\e\v\z\1\v\4\q\l\m\i\q\2\t\o\0\k\0\c\4\a\b\v\q\x\0\4\u\q\w\a\4\z\x\5\g\d\a\p\n\y\2\b\8\y\7\u\q\i\f\g\x\5\3\8\y\b\c\4\z\2\v\a\q\2\v\e\8\1\x\d\j\8\i\0\7\s\7\z\2\v\z\8\f\s\a\6\0\5\i\j\g\e\p\b\w\a\5\o\g\p\3\o\5\1\j\d\6\4\p\d\p\l\c\l\c\q\r\c\g\u\i\9\q\z\5\7\c\7\n\f\r\e\q\p\u\2\o\g\7\m\7\v\z\e\1\b\f\h\b\w\5\u\p\5\v\3\n\s\p\w\8\3\g\q\b\a\g\b\0\g\r\7\m\w\8\k\n\0\4\s\n\8\6\h\w\2\u\q\6\7\1\3\9\g\u\3\k\v\4\a\m\k\4\i\j\p\t\g\u\l\1\k\0\0\7\2\5\3\k\5\r\7\h\i\y\4\n\9\u\4\i\8\w\3\2\b\j\p\h\2\o\5\s\2\6\d\s\x\2\8\5\y\7\k\a\b\n\b\h\w\k\3\i\u\a\0\q\2\o\1\u\o\g\h\s\t\x\t\5\c\b\8\r\c\6\2\m\3\x\p\5\z\l\a\k\f\z\r\o\f\2\s\p\x\f\u\3\6\5\d\d\9\o\2\9\z\w\z\7\t\6\f\t\p\k\j\g\9\y\z\b\n\i\t\x\i\0\o\s\v\x\2\p\v\5\3\h\4\n\x\0\2\i\c\o\b\3\r\x\1\u\1\y\3\o\k\z\5\p\2\i\a\9\7\v\x\r\w\7\1\h\w\a\d\g\y\b\o\a\6\k\2\f\w\c\q\n\w\d\j\q\i\9\w\o\q\3\b\j\v\c\v\c\l\3\w\z\v\3\x\p\5\n\h\w\1\4\a\8\2\7\p\9\y\y\c\3\t\d\t\u\b\o\a\j\9\l\j\5\g\x\1\2\6\x\v\4\g\m\h\x\w\e\r\k\e\l\d\d\n\l\h\h\4\u\b\b\j\y\g\8\j\l\l\4\t\v\i\j\v\3\9\q\r\u\p\r\b\b\j\l\d\m\s\9\i\m\i\c\y\y\e\u\k\p\a\f\1\9\5\3\7\8\p\w\h\2\y\v\g\o\u\6\l\s\l\9\7\1\4\g\u\3\j\1\4\t\q\v\o\c\2\k\b\n\6\4\7\c\6\l\d\v\o\h\b\y\3\z\0\u\i\4\v\t\l\7\o\u\l\v\s\k\s\v\9\s\s\a\q\m\t\6\g\9\t\y\o\7\r\a\2\2\2\4\q\6\t\k\g\3\s\q\l\u\k\r\8\p\0\b\s\c\6\x\p\o\f\u\a\q\e\p\f\u\0\y\y\o\5\b\l\q\3\8\8\9\4\f\i\f\k\g\7\7\i\f\5\n\o\6\6\4\q\w\q\3\v\v\d\z\l\4\6\e\4\8\f\q\t\c\1\d\o\r\z\h\g\f\d\a\b\r\0\r\v\6\a\r\y\j\p\b\6\k\r\8\y\t\p\3\s\k\m\m\f\i\h\j\j\0\g\4\5\y\p\t\5\3\4\7\6\y\j\1\h\r\h\h\c\u\0\3\t\e\t\h\j\k\y\r\e\q\q\i\l\u\3\d\4\7\e\y\r\c\f\v\d\q\h\v\a\l\q\k\d\a\j\v\0\s\v\e\v\m\4\o\r\k\e\e\0\8\g\q\i\g\t\b\3\h\9\x\z\o\q\9\5\a\p\k\u\x\a\r\z\z\6\e\1\u\s\g\6\b\d\3\q\j\p\a\b\q\6\1\4\8\x\n\n\2\p\t\8\i\j\m\l\s\3\f\4\8\h\a\k\j\j\l\m\a\y\7\1\a\9\z\t\r\w\z\w\4\z\5\9\j\3\c\5\r\n\v\1\4\r\d\0\1\q\l\p\j\m\8\3\x\j\d\n\n\t\r\b\8\a\f\s\b\f\3\e\m\0\k\a\n\9\l\t\4\5\w\d\p\2\s\r\e\t\q\c\j\4\s\4\u\b\6\z\3\6\o\q\o\y\l\x\9\c\o\s\q\5\u\6\b\1\i\u\a\0\r\p\j\d\d\q\p\d\y\z\4\n\6\u\e\2\b\p\8\s\5\k\w\6\j\h\y\f\e\k\t\g
\y\7\j\h\h\3\9\o\5\p\s\9\j\0\o\k\p\v\m\p\q\m\p\l\f\8\l\g\z\y\k\x\a\v\6\3\a\d\t\a\2\i\r\2\3\5\k\y\p\n\w\y\m\w\9\v\c\1\r\8\0\0\k\c\0\i\f\6\h\a\k\3\q\v\2\b\k\v\x\f\0\u\e\h\x\e\6\t\2\r\v\4\4\c\j\a\l\2\y\4\r\4\0\i\j\r\t\f\o\m\e\e\g\m\x\r\x\r\4\5\c\h\c\9\r\s\y\7\a\z\r\y\1\e\x\1\3\d\6\i\g\t\f\n\w\7\7\6\p\s\n\r\3\p\6\u\a\2\4\y\c\m\0\p\8\a\y\7\d\v\8\w\y\8\6\f\k\z\9\j\a\i\8\7\d\r\p\8\x\7\v\t\u\g\y\t\n\5\2\a\w\m\1\e\3\4\r\4\c\t\6\7\h\i\6\v\u\k\o\d\u\3\d\s\u\7\p\4\o\g\j\p\3\c\3\4\5\q\d\n\k\a\f\0\m\o\g\r\1\8\s\q\u\0\1\f\f\j\9\c\j\f\i\u\v\c\k\9\v\6\3\d\i\c\0\b\p\3\p\5\l\n\m\c\v\o\u\h\6\9\q\2\y\7\9\v\z\k\n\e\q\w\s\2\l\h\h\f\k\2\l\l\f\t\c\9\z\u\z\y\c\w\h\f\9\r\i\n\n\g\1\o\6\8\h\h\k\m\b\u\b\k\x\4\v\s\x\h\o\9\8\p\3\y\p\i\s\m\1\u\v\h\c\3\e\3\e\x\8\g\x\x\j\t\4\n\o\t\c\c\c\t\t\7\6\7\1\u\e\q\v\d\6\s\u\7\k\h\5\r\t\c\v\2\0\y\c\n\r\v\c\1\k\p\4\h\a\p\5\s\d\c\r\c\o\p\v\m\b\b\p\i\p\b\e\0\g\g\m\e\u\g\0\2\n\x\m\a\0\4\2\t\3\6\s\6\d\8\h\0\z\t\r\d\f\l\a\d\q\9\q\d\k\b\x\2\0\7\n\d\t\f\e\g\6\d\3\z\h\i\q\w\h\w\b\c\r\t\2\w\2\r\m\4\x\4\h\q\8\f\k\n\7\u\y\v\c\x\i\3\9\7\q\b\g\f\r\q\i\o\l\f\k\q\v\8\1\4\u\c\p\8\m\b\j\7\c\0\3\x\p\k\5\t\b\r\i\v\e\8\g\h\9\z\1\p\t\3\u\g\3\q\5\c\z\n\3\k\l\r\u\r\b\w\f\5\3\s\b\v\s\5\9\7\3\r\m\1\r\0\r\c\6\r\5\6\6\n\6\o\i\k\y\z\w\u\s\1\h\w\1\r\b\6\o\j\c\a\x\9\8\k\t\d\3\u\g\p\g\x\y\g\g\0\d\x\o\0\p\n\5\r\7\5\z\u\f\3\4\9\v\r\3\2\z\m\t\b\l\c\s\c\7\8\h\4\0\t\d\g\d\w\f\1\k\y\i\3\s\0\s\h\n\q\7\6\e\e\4\e\3\9\h\j\2\j\o\m\6\x\h\c\v\w\d\d\m\o\y\9\l\8\n\s\6\j\u\p\w\9\i\p\c\g\r\4\2\2\6\b\3\8\t\m\6\d\q\0\g\4\d\g\3\e\8\v\x\n\w\s\j\0\s\o\4\t\3\f\2\6\o\4\r\x\e\y\u\o\x\h\m\0\j\6\n\m\t\y\7\r\m\r\k\q\x\3\q\u\r\o\5\f\j\5\8\4\7\4\j\g\d\c\u\b\a\e\u\0\a\6\r\t\2\m\j\b\6\q\t\u\y\7\t\d\y\y\c\8\i\w\6\5\a\n\j\j\f\8\s\k\x\8\i\w\e\7\t\3\t\g\9\6\h\n\w\t\i\o\v\0\v\9\1\n\1\l\d\9\z\a\u\f\u\7\l\m\h\u\l\4\k\x\1\s\6\q\x\h\a\9\i\o\5\p\e\d\9\9\y\q\5\4\s\7\7\3\z\8\b\d\4\0\0\b\k\y\8\0\v\0\c\p\5\n\o\0\9\p\k\h\3\l\m\7\r\f\g\h\g\8\e\7\t\2\n\2\g\x\4\4\7\a\r\c\k\l\a\l\7\o\r\p\h\m\v\4\x\f\g\f\5\z\2\t\r\l\o\p\4\x\d\g\3\l\a\g\0\5\y\c\7\j\6\8\l\p\9\o\7\m\0\p\t\g\j\1\l\s\t\l\i\u\1\f\c\8\f\i\b\p\5\i\p\w\k\2\2\y\l\3\p\q\9\8\g\5\7\q\u\f\0\n\t\t\w\c\k\t\2\y\r\d\z\o\9\s\3\o\8\2\j\5\3\y\5\g\m\3\u\s\f\k\w\q\y\a\n\e\z\b\3\w\e\w\t\j\a\5\q\y\7\c\j\8\5\r\7\3\g\v\l\3\0\a\x\g\p\e\o\f\y\2\o\v\d\n\k\r\h\n\7\1\s\o\i\2\1\d\b\l\q\v\s\g\f\q\4\l\g\2\2\l\8\s\e\4\k\g\0\u\a\n\r\d\6\w\1\b\3\f\6\f\4\v\c\r\h\1\4\q\k\1\b\h\x\u\i\4\r\7\1\r\o\3\a\5\h\b\2\s\e\2\0\y\k\c\y\v\8\w\g\l\w\o\a\e\k\d\d\9\2\5\5\q\3\c\x\b\q\r\3\6\b\s\b\0\j\a\y\y\6\e\y\2\j\o\n\3\t\x\l\i\e\8\g\r\8\i\p\d\y\7\l\c\d\m\1\y\3\f\m\x\3 ]] 00:06:20.725 ************************************ 00:06:20.725 END TEST dd_rw_offset 00:06:20.725 ************************************ 00:06:20.725 00:06:20.725 real 0m1.472s 00:06:20.725 user 0m1.020s 00:06:20.725 sys 0m0.625s 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:20.725 11:30:23 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.725 11:30:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.725 [2024-07-12 11:30:24.041867] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:20.725 [2024-07-12 11:30:24.041960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62952 ] 00:06:20.725 { 00:06:20.725 "subsystems": [ 00:06:20.725 { 00:06:20.725 "subsystem": "bdev", 00:06:20.725 "config": [ 00:06:20.725 { 00:06:20.725 "params": { 00:06:20.725 "trtype": "pcie", 00:06:20.725 "traddr": "0000:00:10.0", 00:06:20.725 "name": "Nvme0" 00:06:20.725 }, 00:06:20.725 "method": "bdev_nvme_attach_controller" 00:06:20.725 }, 00:06:20.725 { 00:06:20.725 "method": "bdev_wait_for_examine" 00:06:20.725 } 00:06:20.725 ] 00:06:20.725 } 00:06:20.725 ] 00:06:20.725 } 00:06:20.984 [2024-07-12 11:30:24.181434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.984 [2024-07-12 11:30:24.324753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.984 [2024-07-12 11:30:24.380753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.500  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:21.500 00:06:21.501 11:30:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.501 ************************************ 00:06:21.501 END TEST spdk_dd_basic_rw 00:06:21.501 ************************************ 00:06:21.501 00:06:21.501 real 0m19.431s 00:06:21.501 user 0m14.200s 00:06:21.501 sys 0m6.777s 00:06:21.501 11:30:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.501 11:30:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.501 11:30:24 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:21.501 11:30:24 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:21.501 11:30:24 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.501 11:30:24 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.501 11:30:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:21.501 ************************************ 00:06:21.501 START TEST spdk_dd_posix 00:06:21.501 ************************************ 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:21.501 * Looking for test storage... 
00:06:21.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:21.501 * First test run, liburing in use 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.501 ************************************ 00:06:21.501 START TEST dd_flag_append 00:06:21.501 ************************************ 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=cebiwhnc4mzszda20oxdsgnowdnqjekc 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=zfuqitya5a1rgn6lhqzq4ocuvlmxr4bj 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s cebiwhnc4mzszda20oxdsgnowdnqjekc 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s zfuqitya5a1rgn6lhqzq4ocuvlmxr4bj 00:06:21.501 11:30:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:21.501 [2024-07-12 11:30:24.922763] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
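The dd_flag_append case traced just above exercises a plain file-to-file copy, so no bdev JSON is involved: both dump files are seeded with 32 generated bytes, dump0 is then copied onto dump1 with --oflag=append, and the result must be the concatenation of the two strings. A minimal sketch of that check; the variable names are illustrative and the two strings are the ones printed in the trace.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
FILE0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
FILE1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
dump0=cebiwhnc4mzszda20oxdsgnowdnqjekc
dump1=zfuqitya5a1rgn6lhqzq4ocuvlmxr4bj
printf %s "$dump0" > "$FILE0"
printf %s "$dump1" > "$FILE1"
# append mode: dump0's bytes must land after dump1's existing bytes in the output file
"$SPDK_DD" --if="$FILE0" --of="$FILE1" --oflag=append
[[ $(<"$FILE1") == "${dump1}${dump0}" ]]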
00:06:21.501 [2024-07-12 11:30:24.922877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63010 ] 00:06:21.760 [2024-07-12 11:30:25.051247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.760 [2024-07-12 11:30:25.175400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.029 [2024-07-12 11:30:25.229934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.288  Copying: 32/32 [B] (average 31 kBps) 00:06:22.288 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ zfuqitya5a1rgn6lhqzq4ocuvlmxr4bjcebiwhnc4mzszda20oxdsgnowdnqjekc == \z\f\u\q\i\t\y\a\5\a\1\r\g\n\6\l\h\q\z\q\4\o\c\u\v\l\m\x\r\4\b\j\c\e\b\i\w\h\n\c\4\m\z\s\z\d\a\2\0\o\x\d\s\g\n\o\w\d\n\q\j\e\k\c ]] 00:06:22.288 00:06:22.288 real 0m0.629s 00:06:22.288 user 0m0.376s 00:06:22.288 sys 0m0.274s 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:22.288 ************************************ 00:06:22.288 END TEST dd_flag_append 00:06:22.288 ************************************ 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:22.288 ************************************ 00:06:22.288 START TEST dd_flag_directory 00:06:22.288 ************************************ 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:22.288 11:30:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.288 [2024-07-12 11:30:25.611816] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:22.288 [2024-07-12 11:30:25.612653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63044 ] 00:06:22.546 [2024-07-12 11:30:25.752496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.546 [2024-07-12 11:30:25.865427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.546 [2024-07-12 11:30:25.920263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.546 [2024-07-12 11:30:25.955907] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.546 [2024-07-12 11:30:25.955960] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.546 [2024-07-12 11:30:25.955987] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.803 [2024-07-12 11:30:26.071455] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:22.803 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.803 [2024-07-12 11:30:26.233345] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:22.803 [2024-07-12 11:30:26.233717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63054 ] 00:06:23.060 [2024-07-12 11:30:26.376261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.060 [2024-07-12 11:30:26.493412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.318 [2024-07-12 11:30:26.549287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.318 [2024-07-12 11:30:26.586282] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:23.318 [2024-07-12 11:30:26.586336] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:23.318 [2024-07-12 11:30:26.586366] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.318 [2024-07-12 11:30:26.703688] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:23.577 ************************************ 00:06:23.577 END TEST dd_flag_directory 00:06:23.577 ************************************ 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.577 00:06:23.577 real 0m1.256s 00:06:23.577 user 0m0.735s 00:06:23.577 sys 0m0.308s 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:23.577 ************************************ 00:06:23.577 START TEST dd_flag_nofollow 00:06:23.577 ************************************ 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:23.577 11:30:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.577 
[2024-07-12 11:30:26.915908] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:23.577 [2024-07-12 11:30:26.916016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63082 ] 00:06:23.835 [2024-07-12 11:30:27.048350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.835 [2024-07-12 11:30:27.167692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.835 [2024-07-12 11:30:27.222391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.835 [2024-07-12 11:30:27.257535] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:23.835 [2024-07-12 11:30:27.257621] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:23.835 [2024-07-12 11:30:27.257639] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.094 [2024-07-12 11:30:27.376929] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:24.094 11:30:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:24.352 [2024-07-12 11:30:27.544563] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:24.352 [2024-07-12 11:30:27.544760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63092 ] 00:06:24.352 [2024-07-12 11:30:27.688672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.611 [2024-07-12 11:30:27.802418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.611 [2024-07-12 11:30:27.857591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.611 [2024-07-12 11:30:27.892338] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:24.611 [2024-07-12 11:30:27.892393] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:24.611 [2024-07-12 11:30:27.892425] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.611 [2024-07-12 11:30:28.007472] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:24.869 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.869 [2024-07-12 11:30:28.164716] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:24.869 [2024-07-12 11:30:28.165033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63105 ] 00:06:24.869 [2024-07-12 11:30:28.299595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.128 [2024-07-12 11:30:28.412506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.128 [2024-07-12 11:30:28.469317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.386  Copying: 512/512 [B] (average 500 kBps) 00:06:25.386 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 7ks180kp4lf27x756oc5lmnum790rzonc7g05mk0ppwt8s3fgeo65n5zpvtiz3iix4rmfjrydj4psp4ztvvdm1x6igiafs1qvb5vx7iz9rxqitnrvxwssyasnekpuk4ghx1r7nanb0htlehsjtp1794nuzqi3qhqwhc4tc6hgshu0ivmg9v6vmyxf1bkt94q2c0thm0kn63h151y5xqzkrpa8jk5j8e64ykfexyihthkw8l6kbqrg54udw8xy5x519wxfo74f0wzqhnuuc2q5ll8w9qpeylt8jit1vilriw7iuuhgljpj27mdy1signte81zeg40pdmubndqj09ox43ts1qpid88vtzweh7g93x65cj96zgxnixmf2zkiohuroycv2d4j1jlfetpakdezvp4xcp037ed10zkrqo9vivnzckvayu770mid0exo9v5kbqs5sqwjo9lkfoo0u800t45l9ejgbhon35gmcqw5d0rpuxa9zn4cz98nk2rt9ih == \7\k\s\1\8\0\k\p\4\l\f\2\7\x\7\5\6\o\c\5\l\m\n\u\m\7\9\0\r\z\o\n\c\7\g\0\5\m\k\0\p\p\w\t\8\s\3\f\g\e\o\6\5\n\5\z\p\v\t\i\z\3\i\i\x\4\r\m\f\j\r\y\d\j\4\p\s\p\4\z\t\v\v\d\m\1\x\6\i\g\i\a\f\s\1\q\v\b\5\v\x\7\i\z\9\r\x\q\i\t\n\r\v\x\w\s\s\y\a\s\n\e\k\p\u\k\4\g\h\x\1\r\7\n\a\n\b\0\h\t\l\e\h\s\j\t\p\1\7\9\4\n\u\z\q\i\3\q\h\q\w\h\c\4\t\c\6\h\g\s\h\u\0\i\v\m\g\9\v\6\v\m\y\x\f\1\b\k\t\9\4\q\2\c\0\t\h\m\0\k\n\6\3\h\1\5\1\y\5\x\q\z\k\r\p\a\8\j\k\5\j\8\e\6\4\y\k\f\e\x\y\i\h\t\h\k\w\8\l\6\k\b\q\r\g\5\4\u\d\w\8\x\y\5\x\5\1\9\w\x\f\o\7\4\f\0\w\z\q\h\n\u\u\c\2\q\5\l\l\8\w\9\q\p\e\y\l\t\8\j\i\t\1\v\i\l\r\i\w\7\i\u\u\h\g\l\j\p\j\2\7\m\d\y\1\s\i\g\n\t\e\8\1\z\e\g\4\0\p\d\m\u\b\n\d\q\j\0\9\o\x\4\3\t\s\1\q\p\i\d\8\8\v\t\z\w\e\h\7\g\9\3\x\6\5\c\j\9\6\z\g\x\n\i\x\m\f\2\z\k\i\o\h\u\r\o\y\c\v\2\d\4\j\1\j\l\f\e\t\p\a\k\d\e\z\v\p\4\x\c\p\0\3\7\e\d\1\0\z\k\r\q\o\9\v\i\v\n\z\c\k\v\a\y\u\7\7\0\m\i\d\0\e\x\o\9\v\5\k\b\q\s\5\s\q\w\j\o\9\l\k\f\o\o\0\u\8\0\0\t\4\5\l\9\e\j\g\b\h\o\n\3\5\g\m\c\q\w\5\d\0\r\p\u\x\a\9\z\n\4\c\z\9\8\n\k\2\r\t\9\i\h ]] 00:06:25.386 00:06:25.386 real 0m1.882s 00:06:25.386 user 0m1.111s 00:06:25.386 sys 0m0.590s 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.386 ************************************ 00:06:25.386 END TEST dd_flag_nofollow 00:06:25.386 ************************************ 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:25.386 ************************************ 00:06:25.386 START TEST dd_flag_noatime 00:06:25.386 ************************************ 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:25.386 11:30:28 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720783828 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720783828 00:06:25.386 11:30:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:26.405 11:30:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.663 [2024-07-12 11:30:29.874980] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:26.663 [2024-07-12 11:30:29.875303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63147 ] 00:06:26.663 [2024-07-12 11:30:30.017452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.921 [2024-07-12 11:30:30.122108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.921 [2024-07-12 11:30:30.184701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.180  Copying: 512/512 [B] (average 500 kBps) 00:06:27.180 00:06:27.180 11:30:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.180 11:30:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720783828 )) 00:06:27.180 11:30:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.180 11:30:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720783828 )) 00:06:27.180 11:30:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.180 [2024-07-12 11:30:30.492542] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:27.180 [2024-07-12 11:30:30.492659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63161 ] 00:06:27.180 [2024-07-12 11:30:30.622951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.437 [2024-07-12 11:30:30.736431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.437 [2024-07-12 11:30:30.789282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.696  Copying: 512/512 [B] (average 500 kBps) 00:06:27.696 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720783830 )) 00:06:27.696 00:06:27.696 real 0m2.247s 00:06:27.696 user 0m0.707s 00:06:27.696 sys 0m0.587s 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:27.696 ************************************ 00:06:27.696 END TEST dd_flag_noatime 00:06:27.696 ************************************ 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.696 ************************************ 00:06:27.696 START TEST dd_flags_misc 00:06:27.696 ************************************ 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.696 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:27.953 [2024-07-12 11:30:31.155465] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:27.953 [2024-07-12 11:30:31.155574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63195 ] 00:06:27.953 [2024-07-12 11:30:31.293005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.211 [2024-07-12 11:30:31.408494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.211 [2024-07-12 11:30:31.461850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.469  Copying: 512/512 [B] (average 500 kBps) 00:06:28.469 00:06:28.470 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pnk21ajafhwbycjf8mtgl3yhei8c0jht9o8iny1owkd00g0win0jdsxj6hvwjh31nwtr504sdfllpqfuzo5r312xea34ogt0aydr0w5q6y1kp4uo4psokey9nnwa2rksxdi90tyfia4av447wfvyn2i82y0f59yjwl0cfisk5w689tjmfi9c1cdlfcpxdhuqxn3cj7muaehug920i0kb47txuz2prib2zkkvwt8ixfnjgyyo3bf3ll7pkdeoq4bqpqjsssdlrhl6kx9hr6qxhnbuapebskne9tdfmy8sobkeelnsprflq589iuopzil3wgkpsfpcmacmao8j4p6ojq7egekpuczsm2m933mxeryrh601v5dtmqr9grmuptgyn1kwpqeumblulg8adpse7hc648ln0cnhqleio76u001gdqk7tpcfzj0qzkbysq2ysuwjf479lkc4nqmex93ft088qxxwunii7d4tgaktuuzsxt9dutfr9ut3db9bgp8k == \p\n\k\2\1\a\j\a\f\h\w\b\y\c\j\f\8\m\t\g\l\3\y\h\e\i\8\c\0\j\h\t\9\o\8\i\n\y\1\o\w\k\d\0\0\g\0\w\i\n\0\j\d\s\x\j\6\h\v\w\j\h\3\1\n\w\t\r\5\0\4\s\d\f\l\l\p\q\f\u\z\o\5\r\3\1\2\x\e\a\3\4\o\g\t\0\a\y\d\r\0\w\5\q\6\y\1\k\p\4\u\o\4\p\s\o\k\e\y\9\n\n\w\a\2\r\k\s\x\d\i\9\0\t\y\f\i\a\4\a\v\4\4\7\w\f\v\y\n\2\i\8\2\y\0\f\5\9\y\j\w\l\0\c\f\i\s\k\5\w\6\8\9\t\j\m\f\i\9\c\1\c\d\l\f\c\p\x\d\h\u\q\x\n\3\c\j\7\m\u\a\e\h\u\g\9\2\0\i\0\k\b\4\7\t\x\u\z\2\p\r\i\b\2\z\k\k\v\w\t\8\i\x\f\n\j\g\y\y\o\3\b\f\3\l\l\7\p\k\d\e\o\q\4\b\q\p\q\j\s\s\s\d\l\r\h\l\6\k\x\9\h\r\6\q\x\h\n\b\u\a\p\e\b\s\k\n\e\9\t\d\f\m\y\8\s\o\b\k\e\e\l\n\s\p\r\f\l\q\5\8\9\i\u\o\p\z\i\l\3\w\g\k\p\s\f\p\c\m\a\c\m\a\o\8\j\4\p\6\o\j\q\7\e\g\e\k\p\u\c\z\s\m\2\m\9\3\3\m\x\e\r\y\r\h\6\0\1\v\5\d\t\m\q\r\9\g\r\m\u\p\t\g\y\n\1\k\w\p\q\e\u\m\b\l\u\l\g\8\a\d\p\s\e\7\h\c\6\4\8\l\n\0\c\n\h\q\l\e\i\o\7\6\u\0\0\1\g\d\q\k\7\t\p\c\f\z\j\0\q\z\k\b\y\s\q\2\y\s\u\w\j\f\4\7\9\l\k\c\4\n\q\m\e\x\9\3\f\t\0\8\8\q\x\x\w\u\n\i\i\7\d\4\t\g\a\k\t\u\u\z\s\x\t\9\d\u\t\f\r\9\u\t\3\d\b\9\b\g\p\8\k ]] 00:06:28.470 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.470 11:30:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:28.470 [2024-07-12 11:30:31.750985] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:28.470 [2024-07-12 11:30:31.751076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63199 ] 00:06:28.470 [2024-07-12 11:30:31.887512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.727 [2024-07-12 11:30:31.996277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.727 [2024-07-12 11:30:32.049315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.985  Copying: 512/512 [B] (average 500 kBps) 00:06:28.985 00:06:28.985 11:30:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pnk21ajafhwbycjf8mtgl3yhei8c0jht9o8iny1owkd00g0win0jdsxj6hvwjh31nwtr504sdfllpqfuzo5r312xea34ogt0aydr0w5q6y1kp4uo4psokey9nnwa2rksxdi90tyfia4av447wfvyn2i82y0f59yjwl0cfisk5w689tjmfi9c1cdlfcpxdhuqxn3cj7muaehug920i0kb47txuz2prib2zkkvwt8ixfnjgyyo3bf3ll7pkdeoq4bqpqjsssdlrhl6kx9hr6qxhnbuapebskne9tdfmy8sobkeelnsprflq589iuopzil3wgkpsfpcmacmao8j4p6ojq7egekpuczsm2m933mxeryrh601v5dtmqr9grmuptgyn1kwpqeumblulg8adpse7hc648ln0cnhqleio76u001gdqk7tpcfzj0qzkbysq2ysuwjf479lkc4nqmex93ft088qxxwunii7d4tgaktuuzsxt9dutfr9ut3db9bgp8k == \p\n\k\2\1\a\j\a\f\h\w\b\y\c\j\f\8\m\t\g\l\3\y\h\e\i\8\c\0\j\h\t\9\o\8\i\n\y\1\o\w\k\d\0\0\g\0\w\i\n\0\j\d\s\x\j\6\h\v\w\j\h\3\1\n\w\t\r\5\0\4\s\d\f\l\l\p\q\f\u\z\o\5\r\3\1\2\x\e\a\3\4\o\g\t\0\a\y\d\r\0\w\5\q\6\y\1\k\p\4\u\o\4\p\s\o\k\e\y\9\n\n\w\a\2\r\k\s\x\d\i\9\0\t\y\f\i\a\4\a\v\4\4\7\w\f\v\y\n\2\i\8\2\y\0\f\5\9\y\j\w\l\0\c\f\i\s\k\5\w\6\8\9\t\j\m\f\i\9\c\1\c\d\l\f\c\p\x\d\h\u\q\x\n\3\c\j\7\m\u\a\e\h\u\g\9\2\0\i\0\k\b\4\7\t\x\u\z\2\p\r\i\b\2\z\k\k\v\w\t\8\i\x\f\n\j\g\y\y\o\3\b\f\3\l\l\7\p\k\d\e\o\q\4\b\q\p\q\j\s\s\s\d\l\r\h\l\6\k\x\9\h\r\6\q\x\h\n\b\u\a\p\e\b\s\k\n\e\9\t\d\f\m\y\8\s\o\b\k\e\e\l\n\s\p\r\f\l\q\5\8\9\i\u\o\p\z\i\l\3\w\g\k\p\s\f\p\c\m\a\c\m\a\o\8\j\4\p\6\o\j\q\7\e\g\e\k\p\u\c\z\s\m\2\m\9\3\3\m\x\e\r\y\r\h\6\0\1\v\5\d\t\m\q\r\9\g\r\m\u\p\t\g\y\n\1\k\w\p\q\e\u\m\b\l\u\l\g\8\a\d\p\s\e\7\h\c\6\4\8\l\n\0\c\n\h\q\l\e\i\o\7\6\u\0\0\1\g\d\q\k\7\t\p\c\f\z\j\0\q\z\k\b\y\s\q\2\y\s\u\w\j\f\4\7\9\l\k\c\4\n\q\m\e\x\9\3\f\t\0\8\8\q\x\x\w\u\n\i\i\7\d\4\t\g\a\k\t\u\u\z\s\x\t\9\d\u\t\f\r\9\u\t\3\d\b\9\b\g\p\8\k ]] 00:06:28.985 11:30:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.985 11:30:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:28.985 [2024-07-12 11:30:32.338441] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:28.985 [2024-07-12 11:30:32.338549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63214 ] 00:06:29.243 [2024-07-12 11:30:32.477324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.243 [2024-07-12 11:30:32.594419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.243 [2024-07-12 11:30:32.649653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.501  Copying: 512/512 [B] (average 250 kBps) 00:06:29.501 00:06:29.501 11:30:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pnk21ajafhwbycjf8mtgl3yhei8c0jht9o8iny1owkd00g0win0jdsxj6hvwjh31nwtr504sdfllpqfuzo5r312xea34ogt0aydr0w5q6y1kp4uo4psokey9nnwa2rksxdi90tyfia4av447wfvyn2i82y0f59yjwl0cfisk5w689tjmfi9c1cdlfcpxdhuqxn3cj7muaehug920i0kb47txuz2prib2zkkvwt8ixfnjgyyo3bf3ll7pkdeoq4bqpqjsssdlrhl6kx9hr6qxhnbuapebskne9tdfmy8sobkeelnsprflq589iuopzil3wgkpsfpcmacmao8j4p6ojq7egekpuczsm2m933mxeryrh601v5dtmqr9grmuptgyn1kwpqeumblulg8adpse7hc648ln0cnhqleio76u001gdqk7tpcfzj0qzkbysq2ysuwjf479lkc4nqmex93ft088qxxwunii7d4tgaktuuzsxt9dutfr9ut3db9bgp8k == \p\n\k\2\1\a\j\a\f\h\w\b\y\c\j\f\8\m\t\g\l\3\y\h\e\i\8\c\0\j\h\t\9\o\8\i\n\y\1\o\w\k\d\0\0\g\0\w\i\n\0\j\d\s\x\j\6\h\v\w\j\h\3\1\n\w\t\r\5\0\4\s\d\f\l\l\p\q\f\u\z\o\5\r\3\1\2\x\e\a\3\4\o\g\t\0\a\y\d\r\0\w\5\q\6\y\1\k\p\4\u\o\4\p\s\o\k\e\y\9\n\n\w\a\2\r\k\s\x\d\i\9\0\t\y\f\i\a\4\a\v\4\4\7\w\f\v\y\n\2\i\8\2\y\0\f\5\9\y\j\w\l\0\c\f\i\s\k\5\w\6\8\9\t\j\m\f\i\9\c\1\c\d\l\f\c\p\x\d\h\u\q\x\n\3\c\j\7\m\u\a\e\h\u\g\9\2\0\i\0\k\b\4\7\t\x\u\z\2\p\r\i\b\2\z\k\k\v\w\t\8\i\x\f\n\j\g\y\y\o\3\b\f\3\l\l\7\p\k\d\e\o\q\4\b\q\p\q\j\s\s\s\d\l\r\h\l\6\k\x\9\h\r\6\q\x\h\n\b\u\a\p\e\b\s\k\n\e\9\t\d\f\m\y\8\s\o\b\k\e\e\l\n\s\p\r\f\l\q\5\8\9\i\u\o\p\z\i\l\3\w\g\k\p\s\f\p\c\m\a\c\m\a\o\8\j\4\p\6\o\j\q\7\e\g\e\k\p\u\c\z\s\m\2\m\9\3\3\m\x\e\r\y\r\h\6\0\1\v\5\d\t\m\q\r\9\g\r\m\u\p\t\g\y\n\1\k\w\p\q\e\u\m\b\l\u\l\g\8\a\d\p\s\e\7\h\c\6\4\8\l\n\0\c\n\h\q\l\e\i\o\7\6\u\0\0\1\g\d\q\k\7\t\p\c\f\z\j\0\q\z\k\b\y\s\q\2\y\s\u\w\j\f\4\7\9\l\k\c\4\n\q\m\e\x\9\3\f\t\0\8\8\q\x\x\w\u\n\i\i\7\d\4\t\g\a\k\t\u\u\z\s\x\t\9\d\u\t\f\r\9\u\t\3\d\b\9\b\g\p\8\k ]] 00:06:29.501 11:30:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.501 11:30:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:29.759 [2024-07-12 11:30:32.969804] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:29.759 [2024-07-12 11:30:32.969927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63223 ] 00:06:29.759 [2024-07-12 11:30:33.109286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.018 [2024-07-12 11:30:33.228741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.018 [2024-07-12 11:30:33.282710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.275  Copying: 512/512 [B] (average 250 kBps) 00:06:30.275 00:06:30.275 11:30:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ pnk21ajafhwbycjf8mtgl3yhei8c0jht9o8iny1owkd00g0win0jdsxj6hvwjh31nwtr504sdfllpqfuzo5r312xea34ogt0aydr0w5q6y1kp4uo4psokey9nnwa2rksxdi90tyfia4av447wfvyn2i82y0f59yjwl0cfisk5w689tjmfi9c1cdlfcpxdhuqxn3cj7muaehug920i0kb47txuz2prib2zkkvwt8ixfnjgyyo3bf3ll7pkdeoq4bqpqjsssdlrhl6kx9hr6qxhnbuapebskne9tdfmy8sobkeelnsprflq589iuopzil3wgkpsfpcmacmao8j4p6ojq7egekpuczsm2m933mxeryrh601v5dtmqr9grmuptgyn1kwpqeumblulg8adpse7hc648ln0cnhqleio76u001gdqk7tpcfzj0qzkbysq2ysuwjf479lkc4nqmex93ft088qxxwunii7d4tgaktuuzsxt9dutfr9ut3db9bgp8k == \p\n\k\2\1\a\j\a\f\h\w\b\y\c\j\f\8\m\t\g\l\3\y\h\e\i\8\c\0\j\h\t\9\o\8\i\n\y\1\o\w\k\d\0\0\g\0\w\i\n\0\j\d\s\x\j\6\h\v\w\j\h\3\1\n\w\t\r\5\0\4\s\d\f\l\l\p\q\f\u\z\o\5\r\3\1\2\x\e\a\3\4\o\g\t\0\a\y\d\r\0\w\5\q\6\y\1\k\p\4\u\o\4\p\s\o\k\e\y\9\n\n\w\a\2\r\k\s\x\d\i\9\0\t\y\f\i\a\4\a\v\4\4\7\w\f\v\y\n\2\i\8\2\y\0\f\5\9\y\j\w\l\0\c\f\i\s\k\5\w\6\8\9\t\j\m\f\i\9\c\1\c\d\l\f\c\p\x\d\h\u\q\x\n\3\c\j\7\m\u\a\e\h\u\g\9\2\0\i\0\k\b\4\7\t\x\u\z\2\p\r\i\b\2\z\k\k\v\w\t\8\i\x\f\n\j\g\y\y\o\3\b\f\3\l\l\7\p\k\d\e\o\q\4\b\q\p\q\j\s\s\s\d\l\r\h\l\6\k\x\9\h\r\6\q\x\h\n\b\u\a\p\e\b\s\k\n\e\9\t\d\f\m\y\8\s\o\b\k\e\e\l\n\s\p\r\f\l\q\5\8\9\i\u\o\p\z\i\l\3\w\g\k\p\s\f\p\c\m\a\c\m\a\o\8\j\4\p\6\o\j\q\7\e\g\e\k\p\u\c\z\s\m\2\m\9\3\3\m\x\e\r\y\r\h\6\0\1\v\5\d\t\m\q\r\9\g\r\m\u\p\t\g\y\n\1\k\w\p\q\e\u\m\b\l\u\l\g\8\a\d\p\s\e\7\h\c\6\4\8\l\n\0\c\n\h\q\l\e\i\o\7\6\u\0\0\1\g\d\q\k\7\t\p\c\f\z\j\0\q\z\k\b\y\s\q\2\y\s\u\w\j\f\4\7\9\l\k\c\4\n\q\m\e\x\9\3\f\t\0\8\8\q\x\x\w\u\n\i\i\7\d\4\t\g\a\k\t\u\u\z\s\x\t\9\d\u\t\f\r\9\u\t\3\d\b\9\b\g\p\8\k ]] 00:06:30.275 11:30:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:30.275 11:30:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:30.275 11:30:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:30.275 11:30:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:30.276 11:30:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.276 11:30:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:30.276 [2024-07-12 11:30:33.593062] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:30.276 [2024-07-12 11:30:33.593193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63233 ] 00:06:30.534 [2024-07-12 11:30:33.731235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.534 [2024-07-12 11:30:33.828235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.534 [2024-07-12 11:30:33.883019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.792  Copying: 512/512 [B] (average 500 kBps) 00:06:30.792 00:06:30.792 11:30:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ v73h7uyh93230px8o6d8kbsllrbar4zmqlhgzetf84jknlsq3hl6eluqjba6ejafa2c82bqqz1rui6mikcok58dvhlta16bmf0tmpto3p2qj13d013lrahidl5btow5yjw61ptyy92vuunbahcsvsbiklugvyuto0jfw2ls274hbvw0shis9h38ah2reuloy3y90tp8j92fzkm8xgxd68gxksq8h28kn06jf8syg9e39hzz0mi61cui8ho6o4fs7cus9o8h9tiuj5rapczp290ghv24tu1xt4xgimtu001wxsr336k3gup53nduivkdlk5osk9v4fg76pmuwbhcsmgzs3y8xf1si85b875ht5avrp5dggc899ec0zm72lyygjarv90ex2zahms0zeegarllh5wt0lv2hiyk3v57m7yg55gkmt37jbt1znamm8rys5ulbaizdy123e61cedwb3bccpovsvxvki151dg2gv9s4einpr4aacig1qt9gbm6o == \v\7\3\h\7\u\y\h\9\3\2\3\0\p\x\8\o\6\d\8\k\b\s\l\l\r\b\a\r\4\z\m\q\l\h\g\z\e\t\f\8\4\j\k\n\l\s\q\3\h\l\6\e\l\u\q\j\b\a\6\e\j\a\f\a\2\c\8\2\b\q\q\z\1\r\u\i\6\m\i\k\c\o\k\5\8\d\v\h\l\t\a\1\6\b\m\f\0\t\m\p\t\o\3\p\2\q\j\1\3\d\0\1\3\l\r\a\h\i\d\l\5\b\t\o\w\5\y\j\w\6\1\p\t\y\y\9\2\v\u\u\n\b\a\h\c\s\v\s\b\i\k\l\u\g\v\y\u\t\o\0\j\f\w\2\l\s\2\7\4\h\b\v\w\0\s\h\i\s\9\h\3\8\a\h\2\r\e\u\l\o\y\3\y\9\0\t\p\8\j\9\2\f\z\k\m\8\x\g\x\d\6\8\g\x\k\s\q\8\h\2\8\k\n\0\6\j\f\8\s\y\g\9\e\3\9\h\z\z\0\m\i\6\1\c\u\i\8\h\o\6\o\4\f\s\7\c\u\s\9\o\8\h\9\t\i\u\j\5\r\a\p\c\z\p\2\9\0\g\h\v\2\4\t\u\1\x\t\4\x\g\i\m\t\u\0\0\1\w\x\s\r\3\3\6\k\3\g\u\p\5\3\n\d\u\i\v\k\d\l\k\5\o\s\k\9\v\4\f\g\7\6\p\m\u\w\b\h\c\s\m\g\z\s\3\y\8\x\f\1\s\i\8\5\b\8\7\5\h\t\5\a\v\r\p\5\d\g\g\c\8\9\9\e\c\0\z\m\7\2\l\y\y\g\j\a\r\v\9\0\e\x\2\z\a\h\m\s\0\z\e\e\g\a\r\l\l\h\5\w\t\0\l\v\2\h\i\y\k\3\v\5\7\m\7\y\g\5\5\g\k\m\t\3\7\j\b\t\1\z\n\a\m\m\8\r\y\s\5\u\l\b\a\i\z\d\y\1\2\3\e\6\1\c\e\d\w\b\3\b\c\c\p\o\v\s\v\x\v\k\i\1\5\1\d\g\2\g\v\9\s\4\e\i\n\p\r\4\a\a\c\i\g\1\q\t\9\g\b\m\6\o ]] 00:06:30.792 11:30:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.792 11:30:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:30.792 [2024-07-12 11:30:34.202611] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:30.792 [2024-07-12 11:30:34.203505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63248 ] 00:06:31.050 [2024-07-12 11:30:34.341270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.050 [2024-07-12 11:30:34.451936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.309 [2024-07-12 11:30:34.505802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.568  Copying: 512/512 [B] (average 500 kBps) 00:06:31.568 00:06:31.568 11:30:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ v73h7uyh93230px8o6d8kbsllrbar4zmqlhgzetf84jknlsq3hl6eluqjba6ejafa2c82bqqz1rui6mikcok58dvhlta16bmf0tmpto3p2qj13d013lrahidl5btow5yjw61ptyy92vuunbahcsvsbiklugvyuto0jfw2ls274hbvw0shis9h38ah2reuloy3y90tp8j92fzkm8xgxd68gxksq8h28kn06jf8syg9e39hzz0mi61cui8ho6o4fs7cus9o8h9tiuj5rapczp290ghv24tu1xt4xgimtu001wxsr336k3gup53nduivkdlk5osk9v4fg76pmuwbhcsmgzs3y8xf1si85b875ht5avrp5dggc899ec0zm72lyygjarv90ex2zahms0zeegarllh5wt0lv2hiyk3v57m7yg55gkmt37jbt1znamm8rys5ulbaizdy123e61cedwb3bccpovsvxvki151dg2gv9s4einpr4aacig1qt9gbm6o == \v\7\3\h\7\u\y\h\9\3\2\3\0\p\x\8\o\6\d\8\k\b\s\l\l\r\b\a\r\4\z\m\q\l\h\g\z\e\t\f\8\4\j\k\n\l\s\q\3\h\l\6\e\l\u\q\j\b\a\6\e\j\a\f\a\2\c\8\2\b\q\q\z\1\r\u\i\6\m\i\k\c\o\k\5\8\d\v\h\l\t\a\1\6\b\m\f\0\t\m\p\t\o\3\p\2\q\j\1\3\d\0\1\3\l\r\a\h\i\d\l\5\b\t\o\w\5\y\j\w\6\1\p\t\y\y\9\2\v\u\u\n\b\a\h\c\s\v\s\b\i\k\l\u\g\v\y\u\t\o\0\j\f\w\2\l\s\2\7\4\h\b\v\w\0\s\h\i\s\9\h\3\8\a\h\2\r\e\u\l\o\y\3\y\9\0\t\p\8\j\9\2\f\z\k\m\8\x\g\x\d\6\8\g\x\k\s\q\8\h\2\8\k\n\0\6\j\f\8\s\y\g\9\e\3\9\h\z\z\0\m\i\6\1\c\u\i\8\h\o\6\o\4\f\s\7\c\u\s\9\o\8\h\9\t\i\u\j\5\r\a\p\c\z\p\2\9\0\g\h\v\2\4\t\u\1\x\t\4\x\g\i\m\t\u\0\0\1\w\x\s\r\3\3\6\k\3\g\u\p\5\3\n\d\u\i\v\k\d\l\k\5\o\s\k\9\v\4\f\g\7\6\p\m\u\w\b\h\c\s\m\g\z\s\3\y\8\x\f\1\s\i\8\5\b\8\7\5\h\t\5\a\v\r\p\5\d\g\g\c\8\9\9\e\c\0\z\m\7\2\l\y\y\g\j\a\r\v\9\0\e\x\2\z\a\h\m\s\0\z\e\e\g\a\r\l\l\h\5\w\t\0\l\v\2\h\i\y\k\3\v\5\7\m\7\y\g\5\5\g\k\m\t\3\7\j\b\t\1\z\n\a\m\m\8\r\y\s\5\u\l\b\a\i\z\d\y\1\2\3\e\6\1\c\e\d\w\b\3\b\c\c\p\o\v\s\v\x\v\k\i\1\5\1\d\g\2\g\v\9\s\4\e\i\n\p\r\4\a\a\c\i\g\1\q\t\9\g\b\m\6\o ]] 00:06:31.568 11:30:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.568 11:30:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:31.568 [2024-07-12 11:30:34.835080] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:31.568 [2024-07-12 11:30:34.835196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63252 ] 00:06:31.568 [2024-07-12 11:30:34.973442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.826 [2024-07-12 11:30:35.086710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.826 [2024-07-12 11:30:35.141483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.085  Copying: 512/512 [B] (average 500 kBps) 00:06:32.085 00:06:32.085 11:30:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ v73h7uyh93230px8o6d8kbsllrbar4zmqlhgzetf84jknlsq3hl6eluqjba6ejafa2c82bqqz1rui6mikcok58dvhlta16bmf0tmpto3p2qj13d013lrahidl5btow5yjw61ptyy92vuunbahcsvsbiklugvyuto0jfw2ls274hbvw0shis9h38ah2reuloy3y90tp8j92fzkm8xgxd68gxksq8h28kn06jf8syg9e39hzz0mi61cui8ho6o4fs7cus9o8h9tiuj5rapczp290ghv24tu1xt4xgimtu001wxsr336k3gup53nduivkdlk5osk9v4fg76pmuwbhcsmgzs3y8xf1si85b875ht5avrp5dggc899ec0zm72lyygjarv90ex2zahms0zeegarllh5wt0lv2hiyk3v57m7yg55gkmt37jbt1znamm8rys5ulbaizdy123e61cedwb3bccpovsvxvki151dg2gv9s4einpr4aacig1qt9gbm6o == \v\7\3\h\7\u\y\h\9\3\2\3\0\p\x\8\o\6\d\8\k\b\s\l\l\r\b\a\r\4\z\m\q\l\h\g\z\e\t\f\8\4\j\k\n\l\s\q\3\h\l\6\e\l\u\q\j\b\a\6\e\j\a\f\a\2\c\8\2\b\q\q\z\1\r\u\i\6\m\i\k\c\o\k\5\8\d\v\h\l\t\a\1\6\b\m\f\0\t\m\p\t\o\3\p\2\q\j\1\3\d\0\1\3\l\r\a\h\i\d\l\5\b\t\o\w\5\y\j\w\6\1\p\t\y\y\9\2\v\u\u\n\b\a\h\c\s\v\s\b\i\k\l\u\g\v\y\u\t\o\0\j\f\w\2\l\s\2\7\4\h\b\v\w\0\s\h\i\s\9\h\3\8\a\h\2\r\e\u\l\o\y\3\y\9\0\t\p\8\j\9\2\f\z\k\m\8\x\g\x\d\6\8\g\x\k\s\q\8\h\2\8\k\n\0\6\j\f\8\s\y\g\9\e\3\9\h\z\z\0\m\i\6\1\c\u\i\8\h\o\6\o\4\f\s\7\c\u\s\9\o\8\h\9\t\i\u\j\5\r\a\p\c\z\p\2\9\0\g\h\v\2\4\t\u\1\x\t\4\x\g\i\m\t\u\0\0\1\w\x\s\r\3\3\6\k\3\g\u\p\5\3\n\d\u\i\v\k\d\l\k\5\o\s\k\9\v\4\f\g\7\6\p\m\u\w\b\h\c\s\m\g\z\s\3\y\8\x\f\1\s\i\8\5\b\8\7\5\h\t\5\a\v\r\p\5\d\g\g\c\8\9\9\e\c\0\z\m\7\2\l\y\y\g\j\a\r\v\9\0\e\x\2\z\a\h\m\s\0\z\e\e\g\a\r\l\l\h\5\w\t\0\l\v\2\h\i\y\k\3\v\5\7\m\7\y\g\5\5\g\k\m\t\3\7\j\b\t\1\z\n\a\m\m\8\r\y\s\5\u\l\b\a\i\z\d\y\1\2\3\e\6\1\c\e\d\w\b\3\b\c\c\p\o\v\s\v\x\v\k\i\1\5\1\d\g\2\g\v\9\s\4\e\i\n\p\r\4\a\a\c\i\g\1\q\t\9\g\b\m\6\o ]] 00:06:32.085 11:30:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:32.085 11:30:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:32.085 [2024-07-12 11:30:35.455087] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:32.085 [2024-07-12 11:30:35.455194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63267 ] 00:06:32.343 [2024-07-12 11:30:35.591835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.343 [2024-07-12 11:30:35.702129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.343 [2024-07-12 11:30:35.755659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.602  Copying: 512/512 [B] (average 250 kBps) 00:06:32.602 00:06:32.602 11:30:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ v73h7uyh93230px8o6d8kbsllrbar4zmqlhgzetf84jknlsq3hl6eluqjba6ejafa2c82bqqz1rui6mikcok58dvhlta16bmf0tmpto3p2qj13d013lrahidl5btow5yjw61ptyy92vuunbahcsvsbiklugvyuto0jfw2ls274hbvw0shis9h38ah2reuloy3y90tp8j92fzkm8xgxd68gxksq8h28kn06jf8syg9e39hzz0mi61cui8ho6o4fs7cus9o8h9tiuj5rapczp290ghv24tu1xt4xgimtu001wxsr336k3gup53nduivkdlk5osk9v4fg76pmuwbhcsmgzs3y8xf1si85b875ht5avrp5dggc899ec0zm72lyygjarv90ex2zahms0zeegarllh5wt0lv2hiyk3v57m7yg55gkmt37jbt1znamm8rys5ulbaizdy123e61cedwb3bccpovsvxvki151dg2gv9s4einpr4aacig1qt9gbm6o == \v\7\3\h\7\u\y\h\9\3\2\3\0\p\x\8\o\6\d\8\k\b\s\l\l\r\b\a\r\4\z\m\q\l\h\g\z\e\t\f\8\4\j\k\n\l\s\q\3\h\l\6\e\l\u\q\j\b\a\6\e\j\a\f\a\2\c\8\2\b\q\q\z\1\r\u\i\6\m\i\k\c\o\k\5\8\d\v\h\l\t\a\1\6\b\m\f\0\t\m\p\t\o\3\p\2\q\j\1\3\d\0\1\3\l\r\a\h\i\d\l\5\b\t\o\w\5\y\j\w\6\1\p\t\y\y\9\2\v\u\u\n\b\a\h\c\s\v\s\b\i\k\l\u\g\v\y\u\t\o\0\j\f\w\2\l\s\2\7\4\h\b\v\w\0\s\h\i\s\9\h\3\8\a\h\2\r\e\u\l\o\y\3\y\9\0\t\p\8\j\9\2\f\z\k\m\8\x\g\x\d\6\8\g\x\k\s\q\8\h\2\8\k\n\0\6\j\f\8\s\y\g\9\e\3\9\h\z\z\0\m\i\6\1\c\u\i\8\h\o\6\o\4\f\s\7\c\u\s\9\o\8\h\9\t\i\u\j\5\r\a\p\c\z\p\2\9\0\g\h\v\2\4\t\u\1\x\t\4\x\g\i\m\t\u\0\0\1\w\x\s\r\3\3\6\k\3\g\u\p\5\3\n\d\u\i\v\k\d\l\k\5\o\s\k\9\v\4\f\g\7\6\p\m\u\w\b\h\c\s\m\g\z\s\3\y\8\x\f\1\s\i\8\5\b\8\7\5\h\t\5\a\v\r\p\5\d\g\g\c\8\9\9\e\c\0\z\m\7\2\l\y\y\g\j\a\r\v\9\0\e\x\2\z\a\h\m\s\0\z\e\e\g\a\r\l\l\h\5\w\t\0\l\v\2\h\i\y\k\3\v\5\7\m\7\y\g\5\5\g\k\m\t\3\7\j\b\t\1\z\n\a\m\m\8\r\y\s\5\u\l\b\a\i\z\d\y\1\2\3\e\6\1\c\e\d\w\b\3\b\c\c\p\o\v\s\v\x\v\k\i\1\5\1\d\g\2\g\v\9\s\4\e\i\n\p\r\4\a\a\c\i\g\1\q\t\9\g\b\m\6\o ]] 00:06:32.602 00:06:32.602 real 0m4.932s 00:06:32.602 user 0m2.895s 00:06:32.602 sys 0m2.199s 00:06:32.602 ************************************ 00:06:32.602 END TEST dd_flags_misc 00:06:32.602 ************************************ 00:06:32.602 11:30:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.602 11:30:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:32.861 * Second test run, disabling liburing, forcing AIO 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:32.861 ************************************ 00:06:32.861 START TEST dd_flag_append_forced_aio 00:06:32.861 ************************************ 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=g9lx1nsgnowngrmqx3fxuduppf5hg3jo 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=403m7q5ji934t3nlciy1mihlk7p2s8z7 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s g9lx1nsgnowngrmqx3fxuduppf5hg3jo 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 403m7q5ji934t3nlciy1mihlk7p2s8z7 00:06:32.861 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:32.861 [2024-07-12 11:30:36.133299] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:32.861 [2024-07-12 11:30:36.133409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63294 ] 00:06:32.861 [2024-07-12 11:30:36.272089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.119 [2024-07-12 11:30:36.381860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.119 [2024-07-12 11:30:36.434357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.377  Copying: 32/32 [B] (average 31 kBps) 00:06:33.377 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 403m7q5ji934t3nlciy1mihlk7p2s8z7g9lx1nsgnowngrmqx3fxuduppf5hg3jo == \4\0\3\m\7\q\5\j\i\9\3\4\t\3\n\l\c\i\y\1\m\i\h\l\k\7\p\2\s\8\z\7\g\9\l\x\1\n\s\g\n\o\w\n\g\r\m\q\x\3\f\x\u\d\u\p\p\f\5\h\g\3\j\o ]] 00:06:33.377 00:06:33.377 real 0m0.628s 00:06:33.377 user 0m0.357s 00:06:33.377 sys 0m0.151s 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.377 ************************************ 00:06:33.377 END TEST dd_flag_append_forced_aio 00:06:33.377 ************************************ 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:33.377 ************************************ 00:06:33.377 START TEST dd_flag_directory_forced_aio 00:06:33.377 ************************************ 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.377 11:30:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.377 [2024-07-12 11:30:36.803484] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:33.377 [2024-07-12 11:30:36.803624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63322 ] 00:06:33.634 [2024-07-12 11:30:36.941951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.634 [2024-07-12 11:30:37.056369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.891 [2024-07-12 11:30:37.109993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.891 [2024-07-12 11:30:37.145259] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.891 [2024-07-12 11:30:37.145316] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.891 [2024-07-12 11:30:37.145330] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.891 [2024-07-12 11:30:37.261118] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.148 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.148 [2024-07-12 11:30:37.414358] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:34.148 [2024-07-12 11:30:37.414475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63337 ] 00:06:34.148 [2024-07-12 11:30:37.553185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.406 [2024-07-12 11:30:37.662973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.406 [2024-07-12 11:30:37.716682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.406 [2024-07-12 11:30:37.750607] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.406 [2024-07-12 11:30:37.750663] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.406 [2024-07-12 11:30:37.750678] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.722 [2024-07-12 11:30:37.861379] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.722 ************************************ 00:06:34.722 END TEST dd_flag_directory_forced_aio 00:06:34.722 ************************************ 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 
-- # case "$es" in 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.722 00:06:34.722 real 0m1.213s 00:06:34.722 user 0m0.704s 00:06:34.722 sys 0m0.298s 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.722 11:30:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.722 ************************************ 00:06:34.722 START TEST dd_flag_nofollow_forced_aio 00:06:34.722 ************************************ 00:06:34.722 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:06:34.722 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.722 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.722 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.722 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.723 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.723 [2024-07-12 11:30:38.069759] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:34.723 [2024-07-12 11:30:38.069860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63360 ] 00:06:34.981 [2024-07-12 11:30:38.205242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.981 [2024-07-12 11:30:38.324776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.981 [2024-07-12 11:30:38.379379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.981 [2024-07-12 11:30:38.416000] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:34.981 [2024-07-12 11:30:38.416074] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:34.981 [2024-07-12 11:30:38.416090] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.239 [2024-07-12 11:30:38.533414] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
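The nofollow cases in this test point spdk_dd at a symlink and expect the open to be refused. A minimal standalone sketch of the first case, reusing the binary and file paths visible in the trace; the NOT/valid_exec_arg harness wrappers are left out, and the trailing echo is only an illustrative stand-in for the harness's failure check:

  # Symlink created by the test, as shown in the ln -fs lines above.
  ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
  # With --iflag=nofollow the copy must fail with "Too many levels of symbolic links";
  # the test treats that failure as the expected, passing outcome.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    && echo "unexpected success: nofollow was not enforced"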
00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.239 11:30:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.497 [2024-07-12 11:30:38.697543] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:35.497 [2024-07-12 11:30:38.697691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63375 ] 00:06:35.497 [2024-07-12 11:30:38.837659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.755 [2024-07-12 11:30:38.955427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.755 [2024-07-12 11:30:39.011356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.755 [2024-07-12 11:30:39.048759] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.755 [2024-07-12 11:30:39.048812] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.755 [2024-07-12 11:30:39.048853] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.755 [2024-07-12 11:30:39.171031] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.012 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.012 [2024-07-12 11:30:39.335282] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:36.012 [2024-07-12 11:30:39.335427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63391 ] 00:06:36.271 [2024-07-12 11:30:39.475286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.271 [2024-07-12 11:30:39.591979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.271 [2024-07-12 11:30:39.646764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.529  Copying: 512/512 [B] (average 500 kBps) 00:06:36.529 00:06:36.529 ************************************ 00:06:36.529 END TEST dd_flag_nofollow_forced_aio 00:06:36.529 ************************************ 00:06:36.529 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ l36xv8r96i0ojgqpttploomqdq5jchol2tjzha7uhwe4pi4l5ip5gze0kd71l7wpmaacwlu7fuykzklqg3t85ezl2owxwnvzkcikwodwpwh9pbrw4s6ryg3k9s6uc8x9bm9cxp8kf4xpfvr0g7e76k5k9xwn6iyov6k9q6kjagc1rv2ry7s5omi3amqpc5lposdxxbmf4x2a8p8heikkvvhhutmye9of1e2ssucdzbj1tbrem8m8vq50kxlogblxgjrb8porwmvvx4lf3dyu0eiluenmslcopj68ienv1n92rhb85d2kqnix2jzi110xqpxz5ftm5o98uh6oe4ndnd8yhr5eddrwrloqghk6wihwf45edlpx0p1q0jqgmx347yt7lhoc3i9dsunxliyvgfc2x4u4gzdds6fn0w59maz4zow7sqcp52vnijblqr8v5pxj5cn7hjkukjj5g8gj3dcqtjcftejh26jsqfz25eef8ywppowjvfzdui43p85i == \l\3\6\x\v\8\r\9\6\i\0\o\j\g\q\p\t\t\p\l\o\o\m\q\d\q\5\j\c\h\o\l\2\t\j\z\h\a\7\u\h\w\e\4\p\i\4\l\5\i\p\5\g\z\e\0\k\d\7\1\l\7\w\p\m\a\a\c\w\l\u\7\f\u\y\k\z\k\l\q\g\3\t\8\5\e\z\l\2\o\w\x\w\n\v\z\k\c\i\k\w\o\d\w\p\w\h\9\p\b\r\w\4\s\6\r\y\g\3\k\9\s\6\u\c\8\x\9\b\m\9\c\x\p\8\k\f\4\x\p\f\v\r\0\g\7\e\7\6\k\5\k\9\x\w\n\6\i\y\o\v\6\k\9\q\6\k\j\a\g\c\1\r\v\2\r\y\7\s\5\o\m\i\3\a\m\q\p\c\5\l\p\o\s\d\x\x\b\m\f\4\x\2\a\8\p\8\h\e\i\k\k\v\v\h\h\u\t\m\y\e\9\o\f\1\e\2\s\s\u\c\d\z\b\j\1\t\b\r\e\m\8\m\8\v\q\5\0\k\x\l\o\g\b\l\x\g\j\r\b\8\p\o\r\w\m\v\v\x\4\l\f\3\d\y\u\0\e\i\l\u\e\n\m\s\l\c\o\p\j\6\8\i\e\n\v\1\n\9\2\r\h\b\8\5\d\2\k\q\n\i\x\2\j\z\i\1\1\0\x\q\p\x\z\5\f\t\m\5\o\9\8\u\h\6\o\e\4\n\d\n\d\8\y\h\r\5\e\d\d\r\w\r\l\o\q\g\h\k\6\w\i\h\w\f\4\5\e\d\l\p\x\0\p\1\q\0\j\q\g\m\x\3\4\7\y\t\7\l\h\o\c\3\i\9\d\s\u\n\x\l\i\y\v\g\f\c\2\x\4\u\4\g\z\d\d\s\6\f\n\0\w\5\9\m\a\z\4\z\o\w\7\s\q\c\p\5\2\v\n\i\j\b\l\q\r\8\v\5\p\x\j\5\c\n\7\h\j\k\u\k\j\j\5\g\8\g\j\3\d\c\q\t\j\c\f\t\e\j\h\2\6\j\s\q\f\z\2\5\e\e\f\8\y\w\p\p\o\w\j\v\f\z\d\u\i\4\3\p\8\5\i ]] 00:06:36.529 00:06:36.529 real 0m1.908s 00:06:36.529 user 0m1.107s 00:06:36.529 sys 0m0.467s 00:06:36.529 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.529 11:30:39 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.529 11:30:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:36.529 11:30:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:36.529 11:30:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:36.530 ************************************ 00:06:36.530 START TEST dd_flag_noatime_forced_aio 00:06:36.530 ************************************ 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.530 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.788 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720783839 00:06:36.788 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.788 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720783839 00:06:36.788 11:30:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:37.723 11:30:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.723 [2024-07-12 11:30:41.046195] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
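The noatime assertion in this section is easier to read without the xtrace noise. A condensed sketch of what the test sets up and checks, using only commands that appear in the trace; the variable names here are illustrative, and the absolute timestamps will differ on another machine:

  atime_before=$(stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0)
  sleep 1
  # Reading the input with --iflag=noatime must leave its access time unchanged.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --iflag=noatime \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  atime_after=$(stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0)
  (( atime_before == atime_after )) || echo "noatime was not honored"
  # A second copy without --iflag=noatime then expects the access time to move forward.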
00:06:37.723 [2024-07-12 11:30:41.046312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63427 ] 00:06:37.981 [2024-07-12 11:30:41.185717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.981 [2024-07-12 11:30:41.303111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.981 [2024-07-12 11:30:41.358954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.240  Copying: 512/512 [B] (average 500 kBps) 00:06:38.240 00:06:38.240 11:30:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.240 11:30:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720783839 )) 00:06:38.240 11:30:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.240 11:30:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720783839 )) 00:06:38.240 11:30:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.498 [2024-07-12 11:30:41.694384] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:38.498 [2024-07-12 11:30:41.694478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63444 ] 00:06:38.498 [2024-07-12 11:30:41.832825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.498 [2024-07-12 11:30:41.940480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.756 [2024-07-12 11:30:41.993370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.014  Copying: 512/512 [B] (average 500 kBps) 00:06:39.014 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.014 ************************************ 00:06:39.014 END TEST dd_flag_noatime_forced_aio 00:06:39.014 ************************************ 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720783842 )) 00:06:39.014 00:06:39.014 real 0m2.286s 00:06:39.014 user 0m0.722s 00:06:39.014 sys 0m0.321s 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.014 11:30:42 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:39.014 ************************************ 00:06:39.014 START TEST dd_flags_misc_forced_aio 00:06:39.014 ************************************ 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.014 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:39.014 [2024-07-12 11:30:42.353742] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:39.014 [2024-07-12 11:30:42.353824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63471 ] 00:06:39.277 [2024-07-12 11:30:42.487205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.277 [2024-07-12 11:30:42.606341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.277 [2024-07-12 11:30:42.661722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.547  Copying: 512/512 [B] (average 500 kBps) 00:06:39.547 00:06:39.547 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ srgssi96tk806ohekenn8r7w8jiu9yyuxgkbtladrjzmrstyca0jv6yybstrt06q7558b64w7798n0eib0fnugqj9fuk6tj4nqmcvnui1wrzrub9odkz0sl5dlu81lkjgey1vttxw3n7a0l6tdufadwlm2qhr75ip8n1qa04xe5265b0uhogpo8ap2eaxt9z1dgm0ixaif594x2wez56zvjx0fpcbmap7vqpjbfbz3eux76yinghlfmuf8rb123m2a86uvfjwwv10u7o83fr493gcvfn1byp26f21n2sryy1hhujnwe7rf87b040yl75cl9dtq9wliy7bxmuc0yy013m2dgnboao8c9xussev0rhftcscw7nku9ppiyt9akzmgilib1mz7l01ffx9auozm2a4la0nx3se7ist29kksu0yfnfabhfqhtetjx4sdf0eqvwdpuqfcqsvcyffha7law7go0pf1impghuy7w9xyq9wscczindw7yjob44xwxl == 
\s\r\g\s\s\i\9\6\t\k\8\0\6\o\h\e\k\e\n\n\8\r\7\w\8\j\i\u\9\y\y\u\x\g\k\b\t\l\a\d\r\j\z\m\r\s\t\y\c\a\0\j\v\6\y\y\b\s\t\r\t\0\6\q\7\5\5\8\b\6\4\w\7\7\9\8\n\0\e\i\b\0\f\n\u\g\q\j\9\f\u\k\6\t\j\4\n\q\m\c\v\n\u\i\1\w\r\z\r\u\b\9\o\d\k\z\0\s\l\5\d\l\u\8\1\l\k\j\g\e\y\1\v\t\t\x\w\3\n\7\a\0\l\6\t\d\u\f\a\d\w\l\m\2\q\h\r\7\5\i\p\8\n\1\q\a\0\4\x\e\5\2\6\5\b\0\u\h\o\g\p\o\8\a\p\2\e\a\x\t\9\z\1\d\g\m\0\i\x\a\i\f\5\9\4\x\2\w\e\z\5\6\z\v\j\x\0\f\p\c\b\m\a\p\7\v\q\p\j\b\f\b\z\3\e\u\x\7\6\y\i\n\g\h\l\f\m\u\f\8\r\b\1\2\3\m\2\a\8\6\u\v\f\j\w\w\v\1\0\u\7\o\8\3\f\r\4\9\3\g\c\v\f\n\1\b\y\p\2\6\f\2\1\n\2\s\r\y\y\1\h\h\u\j\n\w\e\7\r\f\8\7\b\0\4\0\y\l\7\5\c\l\9\d\t\q\9\w\l\i\y\7\b\x\m\u\c\0\y\y\0\1\3\m\2\d\g\n\b\o\a\o\8\c\9\x\u\s\s\e\v\0\r\h\f\t\c\s\c\w\7\n\k\u\9\p\p\i\y\t\9\a\k\z\m\g\i\l\i\b\1\m\z\7\l\0\1\f\f\x\9\a\u\o\z\m\2\a\4\l\a\0\n\x\3\s\e\7\i\s\t\2\9\k\k\s\u\0\y\f\n\f\a\b\h\f\q\h\t\e\t\j\x\4\s\d\f\0\e\q\v\w\d\p\u\q\f\c\q\s\v\c\y\f\f\h\a\7\l\a\w\7\g\o\0\p\f\1\i\m\p\g\h\u\y\7\w\9\x\y\q\9\w\s\c\c\z\i\n\d\w\7\y\j\o\b\4\4\x\w\x\l ]] 00:06:39.547 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.547 11:30:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:39.805 [2024-07-12 11:30:43.014416] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:39.805 [2024-07-12 11:30:43.014532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63478 ] 00:06:39.805 [2024-07-12 11:30:43.152320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.082 [2024-07-12 11:30:43.269683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.082 [2024-07-12 11:30:43.323081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.339  Copying: 512/512 [B] (average 500 kBps) 00:06:40.339 00:06:40.339 11:30:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ srgssi96tk806ohekenn8r7w8jiu9yyuxgkbtladrjzmrstyca0jv6yybstrt06q7558b64w7798n0eib0fnugqj9fuk6tj4nqmcvnui1wrzrub9odkz0sl5dlu81lkjgey1vttxw3n7a0l6tdufadwlm2qhr75ip8n1qa04xe5265b0uhogpo8ap2eaxt9z1dgm0ixaif594x2wez56zvjx0fpcbmap7vqpjbfbz3eux76yinghlfmuf8rb123m2a86uvfjwwv10u7o83fr493gcvfn1byp26f21n2sryy1hhujnwe7rf87b040yl75cl9dtq9wliy7bxmuc0yy013m2dgnboao8c9xussev0rhftcscw7nku9ppiyt9akzmgilib1mz7l01ffx9auozm2a4la0nx3se7ist29kksu0yfnfabhfqhtetjx4sdf0eqvwdpuqfcqsvcyffha7law7go0pf1impghuy7w9xyq9wscczindw7yjob44xwxl == 
\s\r\g\s\s\i\9\6\t\k\8\0\6\o\h\e\k\e\n\n\8\r\7\w\8\j\i\u\9\y\y\u\x\g\k\b\t\l\a\d\r\j\z\m\r\s\t\y\c\a\0\j\v\6\y\y\b\s\t\r\t\0\6\q\7\5\5\8\b\6\4\w\7\7\9\8\n\0\e\i\b\0\f\n\u\g\q\j\9\f\u\k\6\t\j\4\n\q\m\c\v\n\u\i\1\w\r\z\r\u\b\9\o\d\k\z\0\s\l\5\d\l\u\8\1\l\k\j\g\e\y\1\v\t\t\x\w\3\n\7\a\0\l\6\t\d\u\f\a\d\w\l\m\2\q\h\r\7\5\i\p\8\n\1\q\a\0\4\x\e\5\2\6\5\b\0\u\h\o\g\p\o\8\a\p\2\e\a\x\t\9\z\1\d\g\m\0\i\x\a\i\f\5\9\4\x\2\w\e\z\5\6\z\v\j\x\0\f\p\c\b\m\a\p\7\v\q\p\j\b\f\b\z\3\e\u\x\7\6\y\i\n\g\h\l\f\m\u\f\8\r\b\1\2\3\m\2\a\8\6\u\v\f\j\w\w\v\1\0\u\7\o\8\3\f\r\4\9\3\g\c\v\f\n\1\b\y\p\2\6\f\2\1\n\2\s\r\y\y\1\h\h\u\j\n\w\e\7\r\f\8\7\b\0\4\0\y\l\7\5\c\l\9\d\t\q\9\w\l\i\y\7\b\x\m\u\c\0\y\y\0\1\3\m\2\d\g\n\b\o\a\o\8\c\9\x\u\s\s\e\v\0\r\h\f\t\c\s\c\w\7\n\k\u\9\p\p\i\y\t\9\a\k\z\m\g\i\l\i\b\1\m\z\7\l\0\1\f\f\x\9\a\u\o\z\m\2\a\4\l\a\0\n\x\3\s\e\7\i\s\t\2\9\k\k\s\u\0\y\f\n\f\a\b\h\f\q\h\t\e\t\j\x\4\s\d\f\0\e\q\v\w\d\p\u\q\f\c\q\s\v\c\y\f\f\h\a\7\l\a\w\7\g\o\0\p\f\1\i\m\p\g\h\u\y\7\w\9\x\y\q\9\w\s\c\c\z\i\n\d\w\7\y\j\o\b\4\4\x\w\x\l ]] 00:06:40.339 11:30:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.339 11:30:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:40.339 [2024-07-12 11:30:43.653969] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:40.339 [2024-07-12 11:30:43.654077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63491 ] 00:06:40.597 [2024-07-12 11:30:43.790331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.597 [2024-07-12 11:30:43.905076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.597 [2024-07-12 11:30:43.957611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.855  Copying: 512/512 [B] (average 125 kBps) 00:06:40.855 00:06:40.855 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ srgssi96tk806ohekenn8r7w8jiu9yyuxgkbtladrjzmrstyca0jv6yybstrt06q7558b64w7798n0eib0fnugqj9fuk6tj4nqmcvnui1wrzrub9odkz0sl5dlu81lkjgey1vttxw3n7a0l6tdufadwlm2qhr75ip8n1qa04xe5265b0uhogpo8ap2eaxt9z1dgm0ixaif594x2wez56zvjx0fpcbmap7vqpjbfbz3eux76yinghlfmuf8rb123m2a86uvfjwwv10u7o83fr493gcvfn1byp26f21n2sryy1hhujnwe7rf87b040yl75cl9dtq9wliy7bxmuc0yy013m2dgnboao8c9xussev0rhftcscw7nku9ppiyt9akzmgilib1mz7l01ffx9auozm2a4la0nx3se7ist29kksu0yfnfabhfqhtetjx4sdf0eqvwdpuqfcqsvcyffha7law7go0pf1impghuy7w9xyq9wscczindw7yjob44xwxl == 
\s\r\g\s\s\i\9\6\t\k\8\0\6\o\h\e\k\e\n\n\8\r\7\w\8\j\i\u\9\y\y\u\x\g\k\b\t\l\a\d\r\j\z\m\r\s\t\y\c\a\0\j\v\6\y\y\b\s\t\r\t\0\6\q\7\5\5\8\b\6\4\w\7\7\9\8\n\0\e\i\b\0\f\n\u\g\q\j\9\f\u\k\6\t\j\4\n\q\m\c\v\n\u\i\1\w\r\z\r\u\b\9\o\d\k\z\0\s\l\5\d\l\u\8\1\l\k\j\g\e\y\1\v\t\t\x\w\3\n\7\a\0\l\6\t\d\u\f\a\d\w\l\m\2\q\h\r\7\5\i\p\8\n\1\q\a\0\4\x\e\5\2\6\5\b\0\u\h\o\g\p\o\8\a\p\2\e\a\x\t\9\z\1\d\g\m\0\i\x\a\i\f\5\9\4\x\2\w\e\z\5\6\z\v\j\x\0\f\p\c\b\m\a\p\7\v\q\p\j\b\f\b\z\3\e\u\x\7\6\y\i\n\g\h\l\f\m\u\f\8\r\b\1\2\3\m\2\a\8\6\u\v\f\j\w\w\v\1\0\u\7\o\8\3\f\r\4\9\3\g\c\v\f\n\1\b\y\p\2\6\f\2\1\n\2\s\r\y\y\1\h\h\u\j\n\w\e\7\r\f\8\7\b\0\4\0\y\l\7\5\c\l\9\d\t\q\9\w\l\i\y\7\b\x\m\u\c\0\y\y\0\1\3\m\2\d\g\n\b\o\a\o\8\c\9\x\u\s\s\e\v\0\r\h\f\t\c\s\c\w\7\n\k\u\9\p\p\i\y\t\9\a\k\z\m\g\i\l\i\b\1\m\z\7\l\0\1\f\f\x\9\a\u\o\z\m\2\a\4\l\a\0\n\x\3\s\e\7\i\s\t\2\9\k\k\s\u\0\y\f\n\f\a\b\h\f\q\h\t\e\t\j\x\4\s\d\f\0\e\q\v\w\d\p\u\q\f\c\q\s\v\c\y\f\f\h\a\7\l\a\w\7\g\o\0\p\f\1\i\m\p\g\h\u\y\7\w\9\x\y\q\9\w\s\c\c\z\i\n\d\w\7\y\j\o\b\4\4\x\w\x\l ]] 00:06:40.855 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.855 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:40.855 [2024-07-12 11:30:44.276858] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:06:40.855 [2024-07-12 11:30:44.276958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63493 ] 00:06:41.114 [2024-07-12 11:30:44.416389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.114 [2024-07-12 11:30:44.537020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.373 [2024-07-12 11:30:44.592543] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.632  Copying: 512/512 [B] (average 500 kBps) 00:06:41.632 00:06:41.632 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ srgssi96tk806ohekenn8r7w8jiu9yyuxgkbtladrjzmrstyca0jv6yybstrt06q7558b64w7798n0eib0fnugqj9fuk6tj4nqmcvnui1wrzrub9odkz0sl5dlu81lkjgey1vttxw3n7a0l6tdufadwlm2qhr75ip8n1qa04xe5265b0uhogpo8ap2eaxt9z1dgm0ixaif594x2wez56zvjx0fpcbmap7vqpjbfbz3eux76yinghlfmuf8rb123m2a86uvfjwwv10u7o83fr493gcvfn1byp26f21n2sryy1hhujnwe7rf87b040yl75cl9dtq9wliy7bxmuc0yy013m2dgnboao8c9xussev0rhftcscw7nku9ppiyt9akzmgilib1mz7l01ffx9auozm2a4la0nx3se7ist29kksu0yfnfabhfqhtetjx4sdf0eqvwdpuqfcqsvcyffha7law7go0pf1impghuy7w9xyq9wscczindw7yjob44xwxl == 
\s\r\g\s\s\i\9\6\t\k\8\0\6\o\h\e\k\e\n\n\8\r\7\w\8\j\i\u\9\y\y\u\x\g\k\b\t\l\a\d\r\j\z\m\r\s\t\y\c\a\0\j\v\6\y\y\b\s\t\r\t\0\6\q\7\5\5\8\b\6\4\w\7\7\9\8\n\0\e\i\b\0\f\n\u\g\q\j\9\f\u\k\6\t\j\4\n\q\m\c\v\n\u\i\1\w\r\z\r\u\b\9\o\d\k\z\0\s\l\5\d\l\u\8\1\l\k\j\g\e\y\1\v\t\t\x\w\3\n\7\a\0\l\6\t\d\u\f\a\d\w\l\m\2\q\h\r\7\5\i\p\8\n\1\q\a\0\4\x\e\5\2\6\5\b\0\u\h\o\g\p\o\8\a\p\2\e\a\x\t\9\z\1\d\g\m\0\i\x\a\i\f\5\9\4\x\2\w\e\z\5\6\z\v\j\x\0\f\p\c\b\m\a\p\7\v\q\p\j\b\f\b\z\3\e\u\x\7\6\y\i\n\g\h\l\f\m\u\f\8\r\b\1\2\3\m\2\a\8\6\u\v\f\j\w\w\v\1\0\u\7\o\8\3\f\r\4\9\3\g\c\v\f\n\1\b\y\p\2\6\f\2\1\n\2\s\r\y\y\1\h\h\u\j\n\w\e\7\r\f\8\7\b\0\4\0\y\l\7\5\c\l\9\d\t\q\9\w\l\i\y\7\b\x\m\u\c\0\y\y\0\1\3\m\2\d\g\n\b\o\a\o\8\c\9\x\u\s\s\e\v\0\r\h\f\t\c\s\c\w\7\n\k\u\9\p\p\i\y\t\9\a\k\z\m\g\i\l\i\b\1\m\z\7\l\0\1\f\f\x\9\a\u\o\z\m\2\a\4\l\a\0\n\x\3\s\e\7\i\s\t\2\9\k\k\s\u\0\y\f\n\f\a\b\h\f\q\h\t\e\t\j\x\4\s\d\f\0\e\q\v\w\d\p\u\q\f\c\q\s\v\c\y\f\f\h\a\7\l\a\w\7\g\o\0\p\f\1\i\m\p\g\h\u\y\7\w\9\x\y\q\9\w\s\c\c\z\i\n\d\w\7\y\j\o\b\4\4\x\w\x\l ]] 00:06:41.632 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:41.632 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:41.632 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:41.632 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:41.632 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.632 11:30:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:41.632 [2024-07-12 11:30:44.949927] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:41.633 [2024-07-12 11:30:44.950054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63506 ] 00:06:41.892 [2024-07-12 11:30:45.088374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.892 [2024-07-12 11:30:45.218265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.892 [2024-07-12 11:30:45.277391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.150  Copying: 512/512 [B] (average 500 kBps) 00:06:42.150 00:06:42.150 11:30:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gzr8up57i30db5emwbzncp67ywwraf7719h1q6aq2lrehso8kmp461dw7v2e8cgp7ma95ov6r3c8fv0uihpt8x5khmx1j1gp8fnxfcxripc1j9oqd4728nfzcdlhw2te8pwx1bzo6t2bsbpi7a72sep0sa9q0cao5jk74tqx81vs9gtr6lxsl50rn7svnf7c2lwj06xaxithusfi8dz7si88u4mwse4jq6n46d3f6j1ajkbjizcw8j89545hjcrvwdippwrnum417ezjzg57q722bsk96q3zjcofu1p7wf2rqak57ijbb0ee4tqz3wftgs4a5ox5wddsiull1ina7zrgcoiyysrrj4eo8qb74bvzo9i3wwmtdjsok44ny3ff84x7il4ghk8wp65ybevee0zygfv5qir1dy1nlhxyiife9el7c4f0apb93qmwnhvvcqg85l2rd02gb1ldfzbn8o1engmf4csob8mykmloes83dub6ht6jha6k46w7ar0j == \g\z\r\8\u\p\5\7\i\3\0\d\b\5\e\m\w\b\z\n\c\p\6\7\y\w\w\r\a\f\7\7\1\9\h\1\q\6\a\q\2\l\r\e\h\s\o\8\k\m\p\4\6\1\d\w\7\v\2\e\8\c\g\p\7\m\a\9\5\o\v\6\r\3\c\8\f\v\0\u\i\h\p\t\8\x\5\k\h\m\x\1\j\1\g\p\8\f\n\x\f\c\x\r\i\p\c\1\j\9\o\q\d\4\7\2\8\n\f\z\c\d\l\h\w\2\t\e\8\p\w\x\1\b\z\o\6\t\2\b\s\b\p\i\7\a\7\2\s\e\p\0\s\a\9\q\0\c\a\o\5\j\k\7\4\t\q\x\8\1\v\s\9\g\t\r\6\l\x\s\l\5\0\r\n\7\s\v\n\f\7\c\2\l\w\j\0\6\x\a\x\i\t\h\u\s\f\i\8\d\z\7\s\i\8\8\u\4\m\w\s\e\4\j\q\6\n\4\6\d\3\f\6\j\1\a\j\k\b\j\i\z\c\w\8\j\8\9\5\4\5\h\j\c\r\v\w\d\i\p\p\w\r\n\u\m\4\1\7\e\z\j\z\g\5\7\q\7\2\2\b\s\k\9\6\q\3\z\j\c\o\f\u\1\p\7\w\f\2\r\q\a\k\5\7\i\j\b\b\0\e\e\4\t\q\z\3\w\f\t\g\s\4\a\5\o\x\5\w\d\d\s\i\u\l\l\1\i\n\a\7\z\r\g\c\o\i\y\y\s\r\r\j\4\e\o\8\q\b\7\4\b\v\z\o\9\i\3\w\w\m\t\d\j\s\o\k\4\4\n\y\3\f\f\8\4\x\7\i\l\4\g\h\k\8\w\p\6\5\y\b\e\v\e\e\0\z\y\g\f\v\5\q\i\r\1\d\y\1\n\l\h\x\y\i\i\f\e\9\e\l\7\c\4\f\0\a\p\b\9\3\q\m\w\n\h\v\v\c\q\g\8\5\l\2\r\d\0\2\g\b\1\l\d\f\z\b\n\8\o\1\e\n\g\m\f\4\c\s\o\b\8\m\y\k\m\l\o\e\s\8\3\d\u\b\6\h\t\6\j\h\a\6\k\4\6\w\7\a\r\0\j ]] 00:06:42.150 11:30:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.150 11:30:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:42.408 [2024-07-12 11:30:45.602437] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:42.408 [2024-07-12 11:30:45.602529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63514 ] 00:06:42.408 [2024-07-12 11:30:45.739828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.408 [2024-07-12 11:30:45.838756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.666 [2024-07-12 11:30:45.892790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.925  Copying: 512/512 [B] (average 500 kBps) 00:06:42.925 00:06:42.925 11:30:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gzr8up57i30db5emwbzncp67ywwraf7719h1q6aq2lrehso8kmp461dw7v2e8cgp7ma95ov6r3c8fv0uihpt8x5khmx1j1gp8fnxfcxripc1j9oqd4728nfzcdlhw2te8pwx1bzo6t2bsbpi7a72sep0sa9q0cao5jk74tqx81vs9gtr6lxsl50rn7svnf7c2lwj06xaxithusfi8dz7si88u4mwse4jq6n46d3f6j1ajkbjizcw8j89545hjcrvwdippwrnum417ezjzg57q722bsk96q3zjcofu1p7wf2rqak57ijbb0ee4tqz3wftgs4a5ox5wddsiull1ina7zrgcoiyysrrj4eo8qb74bvzo9i3wwmtdjsok44ny3ff84x7il4ghk8wp65ybevee0zygfv5qir1dy1nlhxyiife9el7c4f0apb93qmwnhvvcqg85l2rd02gb1ldfzbn8o1engmf4csob8mykmloes83dub6ht6jha6k46w7ar0j == \g\z\r\8\u\p\5\7\i\3\0\d\b\5\e\m\w\b\z\n\c\p\6\7\y\w\w\r\a\f\7\7\1\9\h\1\q\6\a\q\2\l\r\e\h\s\o\8\k\m\p\4\6\1\d\w\7\v\2\e\8\c\g\p\7\m\a\9\5\o\v\6\r\3\c\8\f\v\0\u\i\h\p\t\8\x\5\k\h\m\x\1\j\1\g\p\8\f\n\x\f\c\x\r\i\p\c\1\j\9\o\q\d\4\7\2\8\n\f\z\c\d\l\h\w\2\t\e\8\p\w\x\1\b\z\o\6\t\2\b\s\b\p\i\7\a\7\2\s\e\p\0\s\a\9\q\0\c\a\o\5\j\k\7\4\t\q\x\8\1\v\s\9\g\t\r\6\l\x\s\l\5\0\r\n\7\s\v\n\f\7\c\2\l\w\j\0\6\x\a\x\i\t\h\u\s\f\i\8\d\z\7\s\i\8\8\u\4\m\w\s\e\4\j\q\6\n\4\6\d\3\f\6\j\1\a\j\k\b\j\i\z\c\w\8\j\8\9\5\4\5\h\j\c\r\v\w\d\i\p\p\w\r\n\u\m\4\1\7\e\z\j\z\g\5\7\q\7\2\2\b\s\k\9\6\q\3\z\j\c\o\f\u\1\p\7\w\f\2\r\q\a\k\5\7\i\j\b\b\0\e\e\4\t\q\z\3\w\f\t\g\s\4\a\5\o\x\5\w\d\d\s\i\u\l\l\1\i\n\a\7\z\r\g\c\o\i\y\y\s\r\r\j\4\e\o\8\q\b\7\4\b\v\z\o\9\i\3\w\w\m\t\d\j\s\o\k\4\4\n\y\3\f\f\8\4\x\7\i\l\4\g\h\k\8\w\p\6\5\y\b\e\v\e\e\0\z\y\g\f\v\5\q\i\r\1\d\y\1\n\l\h\x\y\i\i\f\e\9\e\l\7\c\4\f\0\a\p\b\9\3\q\m\w\n\h\v\v\c\q\g\8\5\l\2\r\d\0\2\g\b\1\l\d\f\z\b\n\8\o\1\e\n\g\m\f\4\c\s\o\b\8\m\y\k\m\l\o\e\s\8\3\d\u\b\6\h\t\6\j\h\a\6\k\4\6\w\7\a\r\0\j ]] 00:06:42.925 11:30:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.925 11:30:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:42.925 [2024-07-12 11:30:46.240871] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:42.925 [2024-07-12 11:30:46.240990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63521 ] 00:06:43.184 [2024-07-12 11:30:46.379918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.184 [2024-07-12 11:30:46.500354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.184 [2024-07-12 11:30:46.555664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.443  Copying: 512/512 [B] (average 500 kBps) 00:06:43.443 00:06:43.443 11:30:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gzr8up57i30db5emwbzncp67ywwraf7719h1q6aq2lrehso8kmp461dw7v2e8cgp7ma95ov6r3c8fv0uihpt8x5khmx1j1gp8fnxfcxripc1j9oqd4728nfzcdlhw2te8pwx1bzo6t2bsbpi7a72sep0sa9q0cao5jk74tqx81vs9gtr6lxsl50rn7svnf7c2lwj06xaxithusfi8dz7si88u4mwse4jq6n46d3f6j1ajkbjizcw8j89545hjcrvwdippwrnum417ezjzg57q722bsk96q3zjcofu1p7wf2rqak57ijbb0ee4tqz3wftgs4a5ox5wddsiull1ina7zrgcoiyysrrj4eo8qb74bvzo9i3wwmtdjsok44ny3ff84x7il4ghk8wp65ybevee0zygfv5qir1dy1nlhxyiife9el7c4f0apb93qmwnhvvcqg85l2rd02gb1ldfzbn8o1engmf4csob8mykmloes83dub6ht6jha6k46w7ar0j == \g\z\r\8\u\p\5\7\i\3\0\d\b\5\e\m\w\b\z\n\c\p\6\7\y\w\w\r\a\f\7\7\1\9\h\1\q\6\a\q\2\l\r\e\h\s\o\8\k\m\p\4\6\1\d\w\7\v\2\e\8\c\g\p\7\m\a\9\5\o\v\6\r\3\c\8\f\v\0\u\i\h\p\t\8\x\5\k\h\m\x\1\j\1\g\p\8\f\n\x\f\c\x\r\i\p\c\1\j\9\o\q\d\4\7\2\8\n\f\z\c\d\l\h\w\2\t\e\8\p\w\x\1\b\z\o\6\t\2\b\s\b\p\i\7\a\7\2\s\e\p\0\s\a\9\q\0\c\a\o\5\j\k\7\4\t\q\x\8\1\v\s\9\g\t\r\6\l\x\s\l\5\0\r\n\7\s\v\n\f\7\c\2\l\w\j\0\6\x\a\x\i\t\h\u\s\f\i\8\d\z\7\s\i\8\8\u\4\m\w\s\e\4\j\q\6\n\4\6\d\3\f\6\j\1\a\j\k\b\j\i\z\c\w\8\j\8\9\5\4\5\h\j\c\r\v\w\d\i\p\p\w\r\n\u\m\4\1\7\e\z\j\z\g\5\7\q\7\2\2\b\s\k\9\6\q\3\z\j\c\o\f\u\1\p\7\w\f\2\r\q\a\k\5\7\i\j\b\b\0\e\e\4\t\q\z\3\w\f\t\g\s\4\a\5\o\x\5\w\d\d\s\i\u\l\l\1\i\n\a\7\z\r\g\c\o\i\y\y\s\r\r\j\4\e\o\8\q\b\7\4\b\v\z\o\9\i\3\w\w\m\t\d\j\s\o\k\4\4\n\y\3\f\f\8\4\x\7\i\l\4\g\h\k\8\w\p\6\5\y\b\e\v\e\e\0\z\y\g\f\v\5\q\i\r\1\d\y\1\n\l\h\x\y\i\i\f\e\9\e\l\7\c\4\f\0\a\p\b\9\3\q\m\w\n\h\v\v\c\q\g\8\5\l\2\r\d\0\2\g\b\1\l\d\f\z\b\n\8\o\1\e\n\g\m\f\4\c\s\o\b\8\m\y\k\m\l\o\e\s\8\3\d\u\b\6\h\t\6\j\h\a\6\k\4\6\w\7\a\r\0\j ]] 00:06:43.443 11:30:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.443 11:30:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:43.702 [2024-07-12 11:30:46.909883] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
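The dd_flags_misc invocations in this section all follow one pattern; only the I/O flags change, across the direct/nonblock input flags and the direct/nonblock/sync/dsync output flags. A compact sketch of the loop the test effectively runs, with the content check reduced to a plain cmp as an illustrative stand-in (the real script regenerates a 512-byte payload and compares the bytes it read back, as the long [[ ... ]] lines show):

  dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  for flag_ro in direct nonblock; do
    for flag_rw in direct nonblock sync dsync; do
      # Every flag combination must deliver the payload intact.
      "$dd_bin" --aio --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
      cmp "$src" "$dst" || echo "mismatch for $flag_ro/$flag_rw"
    done
  done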
00:06:43.702 [2024-07-12 11:30:46.910000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63534 ] 00:06:43.702 [2024-07-12 11:30:47.052752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.960 [2024-07-12 11:30:47.166451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.960 [2024-07-12 11:30:47.219331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.219  Copying: 512/512 [B] (average 500 kBps) 00:06:44.219 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gzr8up57i30db5emwbzncp67ywwraf7719h1q6aq2lrehso8kmp461dw7v2e8cgp7ma95ov6r3c8fv0uihpt8x5khmx1j1gp8fnxfcxripc1j9oqd4728nfzcdlhw2te8pwx1bzo6t2bsbpi7a72sep0sa9q0cao5jk74tqx81vs9gtr6lxsl50rn7svnf7c2lwj06xaxithusfi8dz7si88u4mwse4jq6n46d3f6j1ajkbjizcw8j89545hjcrvwdippwrnum417ezjzg57q722bsk96q3zjcofu1p7wf2rqak57ijbb0ee4tqz3wftgs4a5ox5wddsiull1ina7zrgcoiyysrrj4eo8qb74bvzo9i3wwmtdjsok44ny3ff84x7il4ghk8wp65ybevee0zygfv5qir1dy1nlhxyiife9el7c4f0apb93qmwnhvvcqg85l2rd02gb1ldfzbn8o1engmf4csob8mykmloes83dub6ht6jha6k46w7ar0j == \g\z\r\8\u\p\5\7\i\3\0\d\b\5\e\m\w\b\z\n\c\p\6\7\y\w\w\r\a\f\7\7\1\9\h\1\q\6\a\q\2\l\r\e\h\s\o\8\k\m\p\4\6\1\d\w\7\v\2\e\8\c\g\p\7\m\a\9\5\o\v\6\r\3\c\8\f\v\0\u\i\h\p\t\8\x\5\k\h\m\x\1\j\1\g\p\8\f\n\x\f\c\x\r\i\p\c\1\j\9\o\q\d\4\7\2\8\n\f\z\c\d\l\h\w\2\t\e\8\p\w\x\1\b\z\o\6\t\2\b\s\b\p\i\7\a\7\2\s\e\p\0\s\a\9\q\0\c\a\o\5\j\k\7\4\t\q\x\8\1\v\s\9\g\t\r\6\l\x\s\l\5\0\r\n\7\s\v\n\f\7\c\2\l\w\j\0\6\x\a\x\i\t\h\u\s\f\i\8\d\z\7\s\i\8\8\u\4\m\w\s\e\4\j\q\6\n\4\6\d\3\f\6\j\1\a\j\k\b\j\i\z\c\w\8\j\8\9\5\4\5\h\j\c\r\v\w\d\i\p\p\w\r\n\u\m\4\1\7\e\z\j\z\g\5\7\q\7\2\2\b\s\k\9\6\q\3\z\j\c\o\f\u\1\p\7\w\f\2\r\q\a\k\5\7\i\j\b\b\0\e\e\4\t\q\z\3\w\f\t\g\s\4\a\5\o\x\5\w\d\d\s\i\u\l\l\1\i\n\a\7\z\r\g\c\o\i\y\y\s\r\r\j\4\e\o\8\q\b\7\4\b\v\z\o\9\i\3\w\w\m\t\d\j\s\o\k\4\4\n\y\3\f\f\8\4\x\7\i\l\4\g\h\k\8\w\p\6\5\y\b\e\v\e\e\0\z\y\g\f\v\5\q\i\r\1\d\y\1\n\l\h\x\y\i\i\f\e\9\e\l\7\c\4\f\0\a\p\b\9\3\q\m\w\n\h\v\v\c\q\g\8\5\l\2\r\d\0\2\g\b\1\l\d\f\z\b\n\8\o\1\e\n\g\m\f\4\c\s\o\b\8\m\y\k\m\l\o\e\s\8\3\d\u\b\6\h\t\6\j\h\a\6\k\4\6\w\7\a\r\0\j ]] 00:06:44.219 00:06:44.219 real 0m5.176s 00:06:44.219 user 0m3.004s 00:06:44.219 sys 0m1.181s 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.219 ************************************ 00:06:44.219 END TEST dd_flags_misc_forced_aio 00:06:44.219 ************************************ 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.219 ************************************ 00:06:44.219 END TEST spdk_dd_posix 00:06:44.219 ************************************ 00:06:44.219 00:06:44.219 real 0m22.754s 00:06:44.219 user 0m11.934s 
00:06:44.219 sys 0m6.714s 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.219 11:30:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:44.219 11:30:47 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:44.219 11:30:47 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:44.219 11:30:47 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.219 11:30:47 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.219 11:30:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:44.219 ************************************ 00:06:44.219 START TEST spdk_dd_malloc 00:06:44.219 ************************************ 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:44.219 * Looking for test storage... 00:06:44.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:44.219 ************************************ 00:06:44.219 START TEST dd_malloc_copy 00:06:44.219 ************************************ 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:44.219 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:44.478 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:44.478 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:44.478 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:44.478 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:44.478 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:44.478 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:44.478 11:30:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.478 [2024-07-12 11:30:47.721794] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:44.478 [2024-07-12 11:30:47.722055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63603 ] 00:06:44.478 { 00:06:44.478 "subsystems": [ 00:06:44.478 { 00:06:44.478 "subsystem": "bdev", 00:06:44.478 "config": [ 00:06:44.478 { 00:06:44.478 "params": { 00:06:44.478 "block_size": 512, 00:06:44.478 "num_blocks": 1048576, 00:06:44.478 "name": "malloc0" 00:06:44.478 }, 00:06:44.478 "method": "bdev_malloc_create" 00:06:44.478 }, 00:06:44.478 { 00:06:44.478 "params": { 00:06:44.478 "block_size": 512, 00:06:44.478 "num_blocks": 1048576, 00:06:44.478 "name": "malloc1" 00:06:44.478 }, 00:06:44.478 "method": "bdev_malloc_create" 00:06:44.478 }, 00:06:44.478 { 00:06:44.478 "method": "bdev_wait_for_examine" 00:06:44.478 } 00:06:44.478 ] 00:06:44.478 } 00:06:44.478 ] 00:06:44.478 } 00:06:44.478 [2024-07-12 11:30:47.861661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.737 [2024-07-12 11:30:47.967880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.737 [2024-07-12 11:30:48.024838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.180  Copying: 210/512 [MB] (210 MBps) Copying: 420/512 [MB] (209 MBps) Copying: 512/512 [MB] (average 210 MBps) 00:06:48.180 00:06:48.180 11:30:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:48.180 11:30:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:48.180 11:30:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:48.180 11:30:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.180 [2024-07-12 11:30:51.476695] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
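Both malloc-copy runs in this section push data between two RAM-backed bdevs declared inline through the JSON printed above. A standalone sketch of the same invocation, assuming bash so the config can be fed through process substitution instead of the generated file descriptor used by the harness:

  # malloc0 and malloc1 are each 1048576 blocks of 512 bytes (512 MiB), matching the config in the log.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(printf '%s' '
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},
      {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc1"},"method":"bdev_malloc_create"},
      {"method":"bdev_wait_for_examine"}]}]}')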
00:06:48.180 [2024-07-12 11:30:51.476963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63650 ] 00:06:48.180 { 00:06:48.180 "subsystems": [ 00:06:48.180 { 00:06:48.180 "subsystem": "bdev", 00:06:48.180 "config": [ 00:06:48.180 { 00:06:48.180 "params": { 00:06:48.180 "block_size": 512, 00:06:48.180 "num_blocks": 1048576, 00:06:48.180 "name": "malloc0" 00:06:48.180 }, 00:06:48.180 "method": "bdev_malloc_create" 00:06:48.180 }, 00:06:48.180 { 00:06:48.180 "params": { 00:06:48.180 "block_size": 512, 00:06:48.180 "num_blocks": 1048576, 00:06:48.180 "name": "malloc1" 00:06:48.180 }, 00:06:48.180 "method": "bdev_malloc_create" 00:06:48.180 }, 00:06:48.180 { 00:06:48.180 "method": "bdev_wait_for_examine" 00:06:48.180 } 00:06:48.180 ] 00:06:48.180 } 00:06:48.180 ] 00:06:48.180 } 00:06:48.180 [2024-07-12 11:30:51.616771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.439 [2024-07-12 11:30:51.702726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.439 [2024-07-12 11:30:51.757858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.879  Copying: 206/512 [MB] (206 MBps) Copying: 416/512 [MB] (209 MBps) Copying: 512/512 [MB] (average 207 MBps) 00:06:51.879 00:06:51.879 00:06:51.879 ************************************ 00:06:51.879 END TEST dd_malloc_copy 00:06:51.879 ************************************ 00:06:51.879 real 0m7.510s 00:06:51.879 user 0m6.501s 00:06:51.879 sys 0m0.840s 00:06:51.879 11:30:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.879 11:30:55 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.879 11:30:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:06:51.879 ************************************ 00:06:51.879 END TEST spdk_dd_malloc 00:06:51.879 ************************************ 00:06:51.879 00:06:51.879 real 0m7.643s 00:06:51.879 user 0m6.554s 00:06:51.879 sys 0m0.917s 00:06:51.879 11:30:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.879 11:30:55 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:51.879 11:30:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:51.879 11:30:55 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:51.879 11:30:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:51.879 11:30:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.879 11:30:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:51.879 ************************************ 00:06:51.879 START TEST spdk_dd_bdev_to_bdev 00:06:51.879 ************************************ 00:06:51.879 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:52.211 * Looking for test storage... 
00:06:52.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:52.211 
11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.211 ************************************ 00:06:52.211 START TEST dd_inflate_file 00:06:52.211 ************************************ 00:06:52.211 11:30:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:52.211 [2024-07-12 11:30:55.426693] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:52.211 [2024-07-12 11:30:55.426807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63760 ] 00:06:52.211 [2024-07-12 11:30:55.566952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.469 [2024-07-12 11:30:55.684225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.469 [2024-07-12 11:30:55.739273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.728  Copying: 64/64 [MB] (average 1641 MBps) 00:06:52.728 00:06:52.728 00:06:52.728 real 0m0.663s 00:06:52.728 user 0m0.400s 00:06:52.728 sys 0m0.311s 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:52.728 ************************************ 00:06:52.728 END TEST dd_inflate_file 00:06:52.728 ************************************ 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.728 ************************************ 00:06:52.728 START TEST dd_copy_to_out_bdev 00:06:52.728 ************************************ 00:06:52.728 11:30:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:52.728 { 00:06:52.728 "subsystems": [ 00:06:52.728 { 00:06:52.728 "subsystem": "bdev", 00:06:52.728 "config": [ 00:06:52.728 { 00:06:52.728 "params": { 00:06:52.728 "trtype": "pcie", 00:06:52.728 "traddr": "0000:00:10.0", 00:06:52.728 "name": "Nvme0" 00:06:52.728 }, 00:06:52.728 "method": "bdev_nvme_attach_controller" 00:06:52.728 }, 00:06:52.728 { 00:06:52.728 "params": { 00:06:52.728 "trtype": "pcie", 00:06:52.728 "traddr": "0000:00:11.0", 00:06:52.728 "name": "Nvme1" 00:06:52.728 }, 00:06:52.728 "method": "bdev_nvme_attach_controller" 00:06:52.728 }, 00:06:52.728 { 00:06:52.728 "method": "bdev_wait_for_examine" 00:06:52.728 } 00:06:52.728 ] 00:06:52.728 } 00:06:52.728 ] 00:06:52.728 } 00:06:52.987 [2024-07-12 11:30:56.225447] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:52.987 [2024-07-12 11:30:56.225667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63794 ] 00:06:52.987 [2024-07-12 11:30:56.365548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.245 [2024-07-12 11:30:56.486341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.245 [2024-07-12 11:30:56.541573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.619  Copying: 63/64 [MB] (63 MBps) Copying: 64/64 [MB] (average 62 MBps) 00:06:54.619 00:06:54.619 00:06:54.619 real 0m1.916s 00:06:54.619 user 0m1.664s 00:06:54.619 sys 0m1.385s 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:54.619 ************************************ 00:06:54.619 END TEST dd_copy_to_out_bdev 00:06:54.619 ************************************ 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:54.619 ************************************ 00:06:54.619 START TEST dd_offset_magic 00:06:54.619 ************************************ 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:54.619 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:54.878 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:54.878 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:54.878 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:54.878 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:54.878 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:54.878 [2024-07-12 11:30:58.109953] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:54.878 [2024-07-12 11:30:58.110247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63839 ] 00:06:54.878 { 00:06:54.878 "subsystems": [ 00:06:54.878 { 00:06:54.878 "subsystem": "bdev", 00:06:54.878 "config": [ 00:06:54.878 { 00:06:54.878 "params": { 00:06:54.878 "trtype": "pcie", 00:06:54.878 "traddr": "0000:00:10.0", 00:06:54.878 "name": "Nvme0" 00:06:54.878 }, 00:06:54.878 "method": "bdev_nvme_attach_controller" 00:06:54.878 }, 00:06:54.878 { 00:06:54.878 "params": { 00:06:54.878 "trtype": "pcie", 00:06:54.878 "traddr": "0000:00:11.0", 00:06:54.878 "name": "Nvme1" 00:06:54.878 }, 00:06:54.878 "method": "bdev_nvme_attach_controller" 00:06:54.878 }, 00:06:54.878 { 00:06:54.878 "method": "bdev_wait_for_examine" 00:06:54.878 } 00:06:54.878 ] 00:06:54.878 } 00:06:54.878 ] 00:06:54.878 } 00:06:54.878 [2024-07-12 11:30:58.245234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.136 [2024-07-12 11:30:58.362325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.136 [2024-07-12 11:30:58.416661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.659  Copying: 65/65 [MB] (average 1015 MBps) 00:06:55.659 00:06:55.659 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:55.659 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:55.659 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:55.659 11:30:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:55.659 [2024-07-12 11:30:58.976784] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:55.659 [2024-07-12 11:30:58.977181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63859 ] 00:06:55.659 { 00:06:55.659 "subsystems": [ 00:06:55.659 { 00:06:55.659 "subsystem": "bdev", 00:06:55.659 "config": [ 00:06:55.659 { 00:06:55.659 "params": { 00:06:55.659 "trtype": "pcie", 00:06:55.659 "traddr": "0000:00:10.0", 00:06:55.659 "name": "Nvme0" 00:06:55.659 }, 00:06:55.659 "method": "bdev_nvme_attach_controller" 00:06:55.659 }, 00:06:55.659 { 00:06:55.659 "params": { 00:06:55.659 "trtype": "pcie", 00:06:55.659 "traddr": "0000:00:11.0", 00:06:55.659 "name": "Nvme1" 00:06:55.659 }, 00:06:55.659 "method": "bdev_nvme_attach_controller" 00:06:55.659 }, 00:06:55.659 { 00:06:55.659 "method": "bdev_wait_for_examine" 00:06:55.659 } 00:06:55.659 ] 00:06:55.659 } 00:06:55.659 ] 00:06:55.659 } 00:06:55.918 [2024-07-12 11:30:59.112597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.918 [2024-07-12 11:30:59.225248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.918 [2024-07-12 11:30:59.278035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.433  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:56.433 00:06:56.433 11:30:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:56.433 11:30:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:56.433 11:30:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:56.433 11:30:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:56.433 11:30:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:56.433 11:30:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:56.433 11:30:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:56.433 [2024-07-12 11:30:59.742989] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:56.433 [2024-07-12 11:30:59.743354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63881 ] 00:06:56.433 { 00:06:56.433 "subsystems": [ 00:06:56.433 { 00:06:56.433 "subsystem": "bdev", 00:06:56.433 "config": [ 00:06:56.433 { 00:06:56.433 "params": { 00:06:56.433 "trtype": "pcie", 00:06:56.433 "traddr": "0000:00:10.0", 00:06:56.433 "name": "Nvme0" 00:06:56.433 }, 00:06:56.433 "method": "bdev_nvme_attach_controller" 00:06:56.433 }, 00:06:56.433 { 00:06:56.433 "params": { 00:06:56.433 "trtype": "pcie", 00:06:56.433 "traddr": "0000:00:11.0", 00:06:56.433 "name": "Nvme1" 00:06:56.433 }, 00:06:56.433 "method": "bdev_nvme_attach_controller" 00:06:56.433 }, 00:06:56.433 { 00:06:56.434 "method": "bdev_wait_for_examine" 00:06:56.434 } 00:06:56.434 ] 00:06:56.434 } 00:06:56.434 ] 00:06:56.434 } 00:06:56.434 [2024-07-12 11:30:59.880597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.692 [2024-07-12 11:30:59.991890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.692 [2024-07-12 11:31:00.045421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.208  Copying: 65/65 [MB] (average 1000 MBps) 00:06:57.208 00:06:57.208 11:31:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:57.208 11:31:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:57.208 11:31:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:57.208 11:31:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:57.208 [2024-07-12 11:31:00.627860] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:57.208 [2024-07-12 11:31:00.628160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63890 ] 00:06:57.208 { 00:06:57.208 "subsystems": [ 00:06:57.208 { 00:06:57.208 "subsystem": "bdev", 00:06:57.208 "config": [ 00:06:57.208 { 00:06:57.208 "params": { 00:06:57.208 "trtype": "pcie", 00:06:57.208 "traddr": "0000:00:10.0", 00:06:57.208 "name": "Nvme0" 00:06:57.208 }, 00:06:57.208 "method": "bdev_nvme_attach_controller" 00:06:57.208 }, 00:06:57.208 { 00:06:57.208 "params": { 00:06:57.208 "trtype": "pcie", 00:06:57.208 "traddr": "0000:00:11.0", 00:06:57.208 "name": "Nvme1" 00:06:57.208 }, 00:06:57.208 "method": "bdev_nvme_attach_controller" 00:06:57.208 }, 00:06:57.208 { 00:06:57.208 "method": "bdev_wait_for_examine" 00:06:57.208 } 00:06:57.208 ] 00:06:57.208 } 00:06:57.208 ] 00:06:57.208 } 00:06:57.466 [2024-07-12 11:31:00.768687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.466 [2024-07-12 11:31:00.882299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.723 [2024-07-12 11:31:00.937950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.981  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:57.981 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:57.981 00:06:57.981 real 0m3.251s 00:06:57.981 user 0m2.404s 00:06:57.981 sys 0m0.915s 00:06:57.981 ************************************ 00:06:57.981 END TEST dd_offset_magic 00:06:57.981 ************************************ 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:57.981 11:31:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:57.981 [2024-07-12 11:31:01.408086] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:57.982 [2024-07-12 11:31:01.408177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63927 ] 00:06:57.982 { 00:06:57.982 "subsystems": [ 00:06:57.982 { 00:06:57.982 "subsystem": "bdev", 00:06:57.982 "config": [ 00:06:57.982 { 00:06:57.982 "params": { 00:06:57.982 "trtype": "pcie", 00:06:57.982 "traddr": "0000:00:10.0", 00:06:57.982 "name": "Nvme0" 00:06:57.982 }, 00:06:57.982 "method": "bdev_nvme_attach_controller" 00:06:57.982 }, 00:06:57.982 { 00:06:57.982 "params": { 00:06:57.982 "trtype": "pcie", 00:06:57.982 "traddr": "0000:00:11.0", 00:06:57.982 "name": "Nvme1" 00:06:57.982 }, 00:06:57.982 "method": "bdev_nvme_attach_controller" 00:06:57.982 }, 00:06:57.982 { 00:06:57.982 "method": "bdev_wait_for_examine" 00:06:57.982 } 00:06:57.982 ] 00:06:57.982 } 00:06:57.982 ] 00:06:57.982 } 00:06:58.240 [2024-07-12 11:31:01.540406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.240 [2024-07-12 11:31:01.627781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.240 [2024-07-12 11:31:01.682205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.758  Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:58.758 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:58.758 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.758 [2024-07-12 11:31:02.134226] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:58.758 [2024-07-12 11:31:02.134603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63948 ] 00:06:58.758 { 00:06:58.758 "subsystems": [ 00:06:58.758 { 00:06:58.758 "subsystem": "bdev", 00:06:58.758 "config": [ 00:06:58.758 { 00:06:58.758 "params": { 00:06:58.758 "trtype": "pcie", 00:06:58.758 "traddr": "0000:00:10.0", 00:06:58.758 "name": "Nvme0" 00:06:58.758 }, 00:06:58.758 "method": "bdev_nvme_attach_controller" 00:06:58.758 }, 00:06:58.758 { 00:06:58.758 "params": { 00:06:58.758 "trtype": "pcie", 00:06:58.758 "traddr": "0000:00:11.0", 00:06:58.758 "name": "Nvme1" 00:06:58.758 }, 00:06:58.758 "method": "bdev_nvme_attach_controller" 00:06:58.758 }, 00:06:58.758 { 00:06:58.758 "method": "bdev_wait_for_examine" 00:06:58.758 } 00:06:58.758 ] 00:06:58.758 } 00:06:58.758 ] 00:06:58.758 } 00:06:59.016 [2024-07-12 11:31:02.272625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.016 [2024-07-12 11:31:02.386193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.016 [2024-07-12 11:31:02.440834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.531  Copying: 5120/5120 [kB] (average 833 MBps) 00:06:59.531 00:06:59.531 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:59.531 00:06:59.531 real 0m7.610s 00:06:59.531 user 0m5.646s 00:06:59.531 sys 0m3.302s 00:06:59.531 ************************************ 00:06:59.531 END TEST spdk_dd_bdev_to_bdev 00:06:59.531 ************************************ 00:06:59.531 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.531 11:31:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.531 11:31:02 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:59.531 11:31:02 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:59.531 11:31:02 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:59.531 11:31:02 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.531 11:31:02 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.531 11:31:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:59.531 ************************************ 00:06:59.531 START TEST spdk_dd_uring 00:06:59.531 ************************************ 00:06:59.531 11:31:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:59.789 * Looking for test storage... 
00:06:59.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:59.789 ************************************ 00:06:59.789 START TEST dd_uring_copy 00:06:59.789 ************************************ 00:06:59.789 
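The uring_zram_copy trace that follows allocates a 512M zram device, builds a malloc bdev plus a uring bdev backed by that zram device, and pushes a magic dump through spdk_dd in both directions before diffing the result. A minimal standalone sketch of that flow — assuming a free zram slot, the usual /sys/block/zramN/disksize attribute, a scratch config name uring.json, and spdk_dd reachable on PATH (the trace itself uses the full build path) — would look roughly like:

# rough sketch of the flow traced below, not the test script itself
id=$(cat /sys/class/zram-control/hot_add)      # allocate a zram device, returns its index
echo 512M > "/sys/block/zram${id}/disksize"    # size it; disksize attribute assumed

cat > uring.json <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_malloc_create", "params": { "name": "malloc0", "block_size": 512, "num_blocks": 1048576 } },
    { "method": "bdev_uring_create",  "params": { "name": "uring0", "filename": "/dev/zram${id}" } },
    { "method": "bdev_wait_for_examine" } ] } ] }
JSON

spdk_dd --if=magic.dump0 --ob=uring0 --json uring.json   # file -> uring bdev
spdk_dd --ib=uring0 --of=magic.dump1 --json uring.json   # uring bdev -> file
diff -q magic.dump0 magic.dump1                          # verify the round trip

Since zram is RAM-backed, the copy rates reported in the trace below reflect the io_uring/bdev path rather than any physical disk.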
11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:59.789 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=qrrsitbui95pdrvbxqpucx92c47ep45l8ontvog0f0kue2zuh03kqhyv8kf3w1bu9mw5zmerbfarsvijvrz4mdxp3rl8w1i7d5qqn623u1b1j2jh3gzwh2lyx6hkol8if1icl26n1amk7d1g9lsze1gzb1edvvn7li29vt1g9rm3bz1p0f13s11mr5qny6yfzd1inh0t0xyppw1hq2h5tq534bxtx1evslugtzo3ebdqojadajhd8yfsvgp2l661pe27vy9mjfjxz235o7t4cz1gu90jyjua1t9ncr14sjvz3l5bdyn8zaeegk83osempfnvnibnn6uebnbsqz4byashjd1370st14k123uhdgoq2kmhajddi4vvhd3lj5ucefop2bmqgypxf0hj3h2k5240vk979u72adwa4fdl16xwp4vqqb1lncpt2kb6xhcpj05ouw2j0tlumtnaudw901rq3r4hbiu64dcug42q7r55ak47pgab7v9xa40et1309z57njkfnjvukcqb8iye6nj10twly6acesh9v3zyt2kytoplid99cq5m0hdm7p5mqn6r06mvb78n75j47k32up45cc63sjgd6z1bd4n4a9k3hi7ag6w2kdaldpz1knjdx7s7h5do4o85ivw1unhaqus9hrh0cuobh4hpbu9t5bu5zcxnzr3li1lq3ehefybw89vll04st1415ajhmam1k69gnvnrze4qd4aheslkdacxwgu8lghmyaidwf33mu3h4svxkja5sktnjj8ihblhhkkgk314oo5nof9dxdytgfv8yypgamkhj3ckwnircq8l9b6aac2znr0ig0rim8983y5yx044k9vnrjewsix7vqgo2v2v093a0imptcfec17fkn908mqigt8mm14orpwzqle9j2vuego4dwzgw3yjv9uqwdsr6d6nzjocecjbq0woodv7cl0vtb5xkacxpn5s133cpzl9u7a0mymlpnkwpksk2pxbynuosxksv0obnejp 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo qrrsitbui95pdrvbxqpucx92c47ep45l8ontvog0f0kue2zuh03kqhyv8kf3w1bu9mw5zmerbfarsvijvrz4mdxp3rl8w1i7d5qqn623u1b1j2jh3gzwh2lyx6hkol8if1icl26n1amk7d1g9lsze1gzb1edvvn7li29vt1g9rm3bz1p0f13s11mr5qny6yfzd1inh0t0xyppw1hq2h5tq534bxtx1evslugtzo3ebdqojadajhd8yfsvgp2l661pe27vy9mjfjxz235o7t4cz1gu90jyjua1t9ncr14sjvz3l5bdyn8zaeegk83osempfnvnibnn6uebnbsqz4byashjd1370st14k123uhdgoq2kmhajddi4vvhd3lj5ucefop2bmqgypxf0hj3h2k5240vk979u72adwa4fdl16xwp4vqqb1lncpt2kb6xhcpj05ouw2j0tlumtnaudw901rq3r4hbiu64dcug42q7r55ak47pgab7v9xa40et1309z57njkfnjvukcqb8iye6nj10twly6acesh9v3zyt2kytoplid99cq5m0hdm7p5mqn6r06mvb78n75j47k32up45cc63sjgd6z1bd4n4a9k3hi7ag6w2kdaldpz1knjdx7s7h5do4o85ivw1unhaqus9hrh0cuobh4hpbu9t5bu5zcxnzr3li1lq3ehefybw89vll04st1415ajhmam1k69gnvnrze4qd4aheslkdacxwgu8lghmyaidwf33mu3h4svxkja5sktnjj8ihblhhkkgk314oo5nof9dxdytgfv8yypgamkhj3ckwnircq8l9b6aac2znr0ig0rim8983y5yx044k9vnrjewsix7vqgo2v2v093a0imptcfec17fkn908mqigt8mm14orpwzqle9j2vuego4dwzgw3yjv9uqwdsr6d6nzjocecjbq0woodv7cl0vtb5xkacxpn5s133cpzl9u7a0mymlpnkwpksk2pxbynuosxksv0obnejp 00:06:59.790 11:31:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:59.790 [2024-07-12 11:31:03.115043] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:06:59.790 [2024-07-12 11:31:03.115156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64018 ] 00:07:00.048 [2024-07-12 11:31:03.251371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.048 [2024-07-12 11:31:03.367878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.048 [2024-07-12 11:31:03.421697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.199  Copying: 511/511 [MB] (average 1233 MBps) 00:07:01.199 00:07:01.199 11:31:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:01.199 11:31:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:01.199 11:31:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.199 11:31:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.199 { 00:07:01.199 "subsystems": [ 00:07:01.199 { 00:07:01.199 "subsystem": "bdev", 00:07:01.199 "config": [ 00:07:01.199 { 00:07:01.199 "params": { 00:07:01.199 "block_size": 512, 00:07:01.199 "num_blocks": 1048576, 00:07:01.199 "name": "malloc0" 00:07:01.199 }, 00:07:01.199 "method": "bdev_malloc_create" 00:07:01.199 }, 00:07:01.199 { 00:07:01.199 "params": { 00:07:01.199 "filename": "/dev/zram1", 00:07:01.199 "name": "uring0" 00:07:01.199 }, 00:07:01.199 "method": "bdev_uring_create" 00:07:01.199 }, 00:07:01.199 { 00:07:01.199 "method": "bdev_wait_for_examine" 00:07:01.199 } 00:07:01.199 ] 00:07:01.199 } 00:07:01.199 ] 00:07:01.199 } 00:07:01.199 [2024-07-12 11:31:04.529192] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:07:01.199 [2024-07-12 11:31:04.529346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64039 ] 00:07:01.458 [2024-07-12 11:31:04.676839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.458 [2024-07-12 11:31:04.796427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.458 [2024-07-12 11:31:04.852155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.584  Copying: 206/512 [MB] (206 MBps) Copying: 408/512 [MB] (202 MBps) Copying: 512/512 [MB] (average 205 MBps) 00:07:04.584 00:07:04.584 11:31:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:04.584 11:31:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:04.584 11:31:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:04.584 11:31:07 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:04.584 [2024-07-12 11:31:08.006026] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:07:04.584 [2024-07-12 11:31:08.006123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64089 ] 00:07:04.584 { 00:07:04.584 "subsystems": [ 00:07:04.584 { 00:07:04.584 "subsystem": "bdev", 00:07:04.584 "config": [ 00:07:04.584 { 00:07:04.584 "params": { 00:07:04.584 "block_size": 512, 00:07:04.584 "num_blocks": 1048576, 00:07:04.584 "name": "malloc0" 00:07:04.584 }, 00:07:04.584 "method": "bdev_malloc_create" 00:07:04.584 }, 00:07:04.584 { 00:07:04.584 "params": { 00:07:04.584 "filename": "/dev/zram1", 00:07:04.584 "name": "uring0" 00:07:04.584 }, 00:07:04.584 "method": "bdev_uring_create" 00:07:04.584 }, 00:07:04.584 { 00:07:04.584 "method": "bdev_wait_for_examine" 00:07:04.584 } 00:07:04.584 ] 00:07:04.584 } 00:07:04.584 ] 00:07:04.584 } 00:07:04.841 [2024-07-12 11:31:08.145610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.841 [2024-07-12 11:31:08.257113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.098 [2024-07-12 11:31:08.309993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.231  Copying: 187/512 [MB] (187 MBps) Copying: 374/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 189 MBps) 00:07:08.231 00:07:08.231 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:08.231 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ qrrsitbui95pdrvbxqpucx92c47ep45l8ontvog0f0kue2zuh03kqhyv8kf3w1bu9mw5zmerbfarsvijvrz4mdxp3rl8w1i7d5qqn623u1b1j2jh3gzwh2lyx6hkol8if1icl26n1amk7d1g9lsze1gzb1edvvn7li29vt1g9rm3bz1p0f13s11mr5qny6yfzd1inh0t0xyppw1hq2h5tq534bxtx1evslugtzo3ebdqojadajhd8yfsvgp2l661pe27vy9mjfjxz235o7t4cz1gu90jyjua1t9ncr14sjvz3l5bdyn8zaeegk83osempfnvnibnn6uebnbsqz4byashjd1370st14k123uhdgoq2kmhajddi4vvhd3lj5ucefop2bmqgypxf0hj3h2k5240vk979u72adwa4fdl16xwp4vqqb1lncpt2kb6xhcpj05ouw2j0tlumtnaudw901rq3r4hbiu64dcug42q7r55ak47pgab7v9xa40et1309z57njkfnjvukcqb8iye6nj10twly6acesh9v3zyt2kytoplid99cq5m0hdm7p5mqn6r06mvb78n75j47k32up45cc63sjgd6z1bd4n4a9k3hi7ag6w2kdaldpz1knjdx7s7h5do4o85ivw1unhaqus9hrh0cuobh4hpbu9t5bu5zcxnzr3li1lq3ehefybw89vll04st1415ajhmam1k69gnvnrze4qd4aheslkdacxwgu8lghmyaidwf33mu3h4svxkja5sktnjj8ihblhhkkgk314oo5nof9dxdytgfv8yypgamkhj3ckwnircq8l9b6aac2znr0ig0rim8983y5yx044k9vnrjewsix7vqgo2v2v093a0imptcfec17fkn908mqigt8mm14orpwzqle9j2vuego4dwzgw3yjv9uqwdsr6d6nzjocecjbq0woodv7cl0vtb5xkacxpn5s133cpzl9u7a0mymlpnkwpksk2pxbynuosxksv0obnejp == 
\q\r\r\s\i\t\b\u\i\9\5\p\d\r\v\b\x\q\p\u\c\x\9\2\c\4\7\e\p\4\5\l\8\o\n\t\v\o\g\0\f\0\k\u\e\2\z\u\h\0\3\k\q\h\y\v\8\k\f\3\w\1\b\u\9\m\w\5\z\m\e\r\b\f\a\r\s\v\i\j\v\r\z\4\m\d\x\p\3\r\l\8\w\1\i\7\d\5\q\q\n\6\2\3\u\1\b\1\j\2\j\h\3\g\z\w\h\2\l\y\x\6\h\k\o\l\8\i\f\1\i\c\l\2\6\n\1\a\m\k\7\d\1\g\9\l\s\z\e\1\g\z\b\1\e\d\v\v\n\7\l\i\2\9\v\t\1\g\9\r\m\3\b\z\1\p\0\f\1\3\s\1\1\m\r\5\q\n\y\6\y\f\z\d\1\i\n\h\0\t\0\x\y\p\p\w\1\h\q\2\h\5\t\q\5\3\4\b\x\t\x\1\e\v\s\l\u\g\t\z\o\3\e\b\d\q\o\j\a\d\a\j\h\d\8\y\f\s\v\g\p\2\l\6\6\1\p\e\2\7\v\y\9\m\j\f\j\x\z\2\3\5\o\7\t\4\c\z\1\g\u\9\0\j\y\j\u\a\1\t\9\n\c\r\1\4\s\j\v\z\3\l\5\b\d\y\n\8\z\a\e\e\g\k\8\3\o\s\e\m\p\f\n\v\n\i\b\n\n\6\u\e\b\n\b\s\q\z\4\b\y\a\s\h\j\d\1\3\7\0\s\t\1\4\k\1\2\3\u\h\d\g\o\q\2\k\m\h\a\j\d\d\i\4\v\v\h\d\3\l\j\5\u\c\e\f\o\p\2\b\m\q\g\y\p\x\f\0\h\j\3\h\2\k\5\2\4\0\v\k\9\7\9\u\7\2\a\d\w\a\4\f\d\l\1\6\x\w\p\4\v\q\q\b\1\l\n\c\p\t\2\k\b\6\x\h\c\p\j\0\5\o\u\w\2\j\0\t\l\u\m\t\n\a\u\d\w\9\0\1\r\q\3\r\4\h\b\i\u\6\4\d\c\u\g\4\2\q\7\r\5\5\a\k\4\7\p\g\a\b\7\v\9\x\a\4\0\e\t\1\3\0\9\z\5\7\n\j\k\f\n\j\v\u\k\c\q\b\8\i\y\e\6\n\j\1\0\t\w\l\y\6\a\c\e\s\h\9\v\3\z\y\t\2\k\y\t\o\p\l\i\d\9\9\c\q\5\m\0\h\d\m\7\p\5\m\q\n\6\r\0\6\m\v\b\7\8\n\7\5\j\4\7\k\3\2\u\p\4\5\c\c\6\3\s\j\g\d\6\z\1\b\d\4\n\4\a\9\k\3\h\i\7\a\g\6\w\2\k\d\a\l\d\p\z\1\k\n\j\d\x\7\s\7\h\5\d\o\4\o\8\5\i\v\w\1\u\n\h\a\q\u\s\9\h\r\h\0\c\u\o\b\h\4\h\p\b\u\9\t\5\b\u\5\z\c\x\n\z\r\3\l\i\1\l\q\3\e\h\e\f\y\b\w\8\9\v\l\l\0\4\s\t\1\4\1\5\a\j\h\m\a\m\1\k\6\9\g\n\v\n\r\z\e\4\q\d\4\a\h\e\s\l\k\d\a\c\x\w\g\u\8\l\g\h\m\y\a\i\d\w\f\3\3\m\u\3\h\4\s\v\x\k\j\a\5\s\k\t\n\j\j\8\i\h\b\l\h\h\k\k\g\k\3\1\4\o\o\5\n\o\f\9\d\x\d\y\t\g\f\v\8\y\y\p\g\a\m\k\h\j\3\c\k\w\n\i\r\c\q\8\l\9\b\6\a\a\c\2\z\n\r\0\i\g\0\r\i\m\8\9\8\3\y\5\y\x\0\4\4\k\9\v\n\r\j\e\w\s\i\x\7\v\q\g\o\2\v\2\v\0\9\3\a\0\i\m\p\t\c\f\e\c\1\7\f\k\n\9\0\8\m\q\i\g\t\8\m\m\1\4\o\r\p\w\z\q\l\e\9\j\2\v\u\e\g\o\4\d\w\z\g\w\3\y\j\v\9\u\q\w\d\s\r\6\d\6\n\z\j\o\c\e\c\j\b\q\0\w\o\o\d\v\7\c\l\0\v\t\b\5\x\k\a\c\x\p\n\5\s\1\3\3\c\p\z\l\9\u\7\a\0\m\y\m\l\p\n\k\w\p\k\s\k\2\p\x\b\y\n\u\o\s\x\k\s\v\0\o\b\n\e\j\p ]] 00:07:08.231 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:08.231 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ qrrsitbui95pdrvbxqpucx92c47ep45l8ontvog0f0kue2zuh03kqhyv8kf3w1bu9mw5zmerbfarsvijvrz4mdxp3rl8w1i7d5qqn623u1b1j2jh3gzwh2lyx6hkol8if1icl26n1amk7d1g9lsze1gzb1edvvn7li29vt1g9rm3bz1p0f13s11mr5qny6yfzd1inh0t0xyppw1hq2h5tq534bxtx1evslugtzo3ebdqojadajhd8yfsvgp2l661pe27vy9mjfjxz235o7t4cz1gu90jyjua1t9ncr14sjvz3l5bdyn8zaeegk83osempfnvnibnn6uebnbsqz4byashjd1370st14k123uhdgoq2kmhajddi4vvhd3lj5ucefop2bmqgypxf0hj3h2k5240vk979u72adwa4fdl16xwp4vqqb1lncpt2kb6xhcpj05ouw2j0tlumtnaudw901rq3r4hbiu64dcug42q7r55ak47pgab7v9xa40et1309z57njkfnjvukcqb8iye6nj10twly6acesh9v3zyt2kytoplid99cq5m0hdm7p5mqn6r06mvb78n75j47k32up45cc63sjgd6z1bd4n4a9k3hi7ag6w2kdaldpz1knjdx7s7h5do4o85ivw1unhaqus9hrh0cuobh4hpbu9t5bu5zcxnzr3li1lq3ehefybw89vll04st1415ajhmam1k69gnvnrze4qd4aheslkdacxwgu8lghmyaidwf33mu3h4svxkja5sktnjj8ihblhhkkgk314oo5nof9dxdytgfv8yypgamkhj3ckwnircq8l9b6aac2znr0ig0rim8983y5yx044k9vnrjewsix7vqgo2v2v093a0imptcfec17fkn908mqigt8mm14orpwzqle9j2vuego4dwzgw3yjv9uqwdsr6d6nzjocecjbq0woodv7cl0vtb5xkacxpn5s133cpzl9u7a0mymlpnkwpksk2pxbynuosxksv0obnejp == 
\q\r\r\s\i\t\b\u\i\9\5\p\d\r\v\b\x\q\p\u\c\x\9\2\c\4\7\e\p\4\5\l\8\o\n\t\v\o\g\0\f\0\k\u\e\2\z\u\h\0\3\k\q\h\y\v\8\k\f\3\w\1\b\u\9\m\w\5\z\m\e\r\b\f\a\r\s\v\i\j\v\r\z\4\m\d\x\p\3\r\l\8\w\1\i\7\d\5\q\q\n\6\2\3\u\1\b\1\j\2\j\h\3\g\z\w\h\2\l\y\x\6\h\k\o\l\8\i\f\1\i\c\l\2\6\n\1\a\m\k\7\d\1\g\9\l\s\z\e\1\g\z\b\1\e\d\v\v\n\7\l\i\2\9\v\t\1\g\9\r\m\3\b\z\1\p\0\f\1\3\s\1\1\m\r\5\q\n\y\6\y\f\z\d\1\i\n\h\0\t\0\x\y\p\p\w\1\h\q\2\h\5\t\q\5\3\4\b\x\t\x\1\e\v\s\l\u\g\t\z\o\3\e\b\d\q\o\j\a\d\a\j\h\d\8\y\f\s\v\g\p\2\l\6\6\1\p\e\2\7\v\y\9\m\j\f\j\x\z\2\3\5\o\7\t\4\c\z\1\g\u\9\0\j\y\j\u\a\1\t\9\n\c\r\1\4\s\j\v\z\3\l\5\b\d\y\n\8\z\a\e\e\g\k\8\3\o\s\e\m\p\f\n\v\n\i\b\n\n\6\u\e\b\n\b\s\q\z\4\b\y\a\s\h\j\d\1\3\7\0\s\t\1\4\k\1\2\3\u\h\d\g\o\q\2\k\m\h\a\j\d\d\i\4\v\v\h\d\3\l\j\5\u\c\e\f\o\p\2\b\m\q\g\y\p\x\f\0\h\j\3\h\2\k\5\2\4\0\v\k\9\7\9\u\7\2\a\d\w\a\4\f\d\l\1\6\x\w\p\4\v\q\q\b\1\l\n\c\p\t\2\k\b\6\x\h\c\p\j\0\5\o\u\w\2\j\0\t\l\u\m\t\n\a\u\d\w\9\0\1\r\q\3\r\4\h\b\i\u\6\4\d\c\u\g\4\2\q\7\r\5\5\a\k\4\7\p\g\a\b\7\v\9\x\a\4\0\e\t\1\3\0\9\z\5\7\n\j\k\f\n\j\v\u\k\c\q\b\8\i\y\e\6\n\j\1\0\t\w\l\y\6\a\c\e\s\h\9\v\3\z\y\t\2\k\y\t\o\p\l\i\d\9\9\c\q\5\m\0\h\d\m\7\p\5\m\q\n\6\r\0\6\m\v\b\7\8\n\7\5\j\4\7\k\3\2\u\p\4\5\c\c\6\3\s\j\g\d\6\z\1\b\d\4\n\4\a\9\k\3\h\i\7\a\g\6\w\2\k\d\a\l\d\p\z\1\k\n\j\d\x\7\s\7\h\5\d\o\4\o\8\5\i\v\w\1\u\n\h\a\q\u\s\9\h\r\h\0\c\u\o\b\h\4\h\p\b\u\9\t\5\b\u\5\z\c\x\n\z\r\3\l\i\1\l\q\3\e\h\e\f\y\b\w\8\9\v\l\l\0\4\s\t\1\4\1\5\a\j\h\m\a\m\1\k\6\9\g\n\v\n\r\z\e\4\q\d\4\a\h\e\s\l\k\d\a\c\x\w\g\u\8\l\g\h\m\y\a\i\d\w\f\3\3\m\u\3\h\4\s\v\x\k\j\a\5\s\k\t\n\j\j\8\i\h\b\l\h\h\k\k\g\k\3\1\4\o\o\5\n\o\f\9\d\x\d\y\t\g\f\v\8\y\y\p\g\a\m\k\h\j\3\c\k\w\n\i\r\c\q\8\l\9\b\6\a\a\c\2\z\n\r\0\i\g\0\r\i\m\8\9\8\3\y\5\y\x\0\4\4\k\9\v\n\r\j\e\w\s\i\x\7\v\q\g\o\2\v\2\v\0\9\3\a\0\i\m\p\t\c\f\e\c\1\7\f\k\n\9\0\8\m\q\i\g\t\8\m\m\1\4\o\r\p\w\z\q\l\e\9\j\2\v\u\e\g\o\4\d\w\z\g\w\3\y\j\v\9\u\q\w\d\s\r\6\d\6\n\z\j\o\c\e\c\j\b\q\0\w\o\o\d\v\7\c\l\0\v\t\b\5\x\k\a\c\x\p\n\5\s\1\3\3\c\p\z\l\9\u\7\a\0\m\y\m\l\p\n\k\w\p\k\s\k\2\p\x\b\y\n\u\o\s\x\k\s\v\0\o\b\n\e\j\p ]] 00:07:08.231 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:08.857 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:08.857 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:08.857 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:08.857 11:31:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:08.857 [2024-07-12 11:31:12.031848] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:07:08.857 [2024-07-12 11:31:12.032199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64155 ] 00:07:08.857 { 00:07:08.857 "subsystems": [ 00:07:08.857 { 00:07:08.857 "subsystem": "bdev", 00:07:08.857 "config": [ 00:07:08.857 { 00:07:08.857 "params": { 00:07:08.857 "block_size": 512, 00:07:08.857 "num_blocks": 1048576, 00:07:08.857 "name": "malloc0" 00:07:08.857 }, 00:07:08.857 "method": "bdev_malloc_create" 00:07:08.857 }, 00:07:08.857 { 00:07:08.857 "params": { 00:07:08.857 "filename": "/dev/zram1", 00:07:08.857 "name": "uring0" 00:07:08.857 }, 00:07:08.857 "method": "bdev_uring_create" 00:07:08.857 }, 00:07:08.857 { 00:07:08.857 "method": "bdev_wait_for_examine" 00:07:08.857 } 00:07:08.857 ] 00:07:08.857 } 00:07:08.857 ] 00:07:08.857 } 00:07:08.857 [2024-07-12 11:31:12.171859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.857 [2024-07-12 11:31:12.286783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.117 [2024-07-12 11:31:12.342918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.877  Copying: 155/512 [MB] (155 MBps) Copying: 306/512 [MB] (151 MBps) Copying: 463/512 [MB] (156 MBps) Copying: 512/512 [MB] (average 154 MBps) 00:07:12.877 00:07:12.877 11:31:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:12.877 11:31:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:12.877 11:31:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:12.877 11:31:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:12.877 11:31:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:12.877 11:31:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:12.877 11:31:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:12.877 11:31:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.135 [2024-07-12 11:31:16.331543] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:07:13.135 [2024-07-12 11:31:16.331687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64211 ] 00:07:13.135 { 00:07:13.135 "subsystems": [ 00:07:13.135 { 00:07:13.135 "subsystem": "bdev", 00:07:13.135 "config": [ 00:07:13.135 { 00:07:13.135 "params": { 00:07:13.135 "block_size": 512, 00:07:13.135 "num_blocks": 1048576, 00:07:13.135 "name": "malloc0" 00:07:13.135 }, 00:07:13.135 "method": "bdev_malloc_create" 00:07:13.135 }, 00:07:13.135 { 00:07:13.135 "params": { 00:07:13.135 "filename": "/dev/zram1", 00:07:13.135 "name": "uring0" 00:07:13.135 }, 00:07:13.135 "method": "bdev_uring_create" 00:07:13.135 }, 00:07:13.135 { 00:07:13.135 "params": { 00:07:13.135 "name": "uring0" 00:07:13.135 }, 00:07:13.135 "method": "bdev_uring_delete" 00:07:13.135 }, 00:07:13.135 { 00:07:13.135 "method": "bdev_wait_for_examine" 00:07:13.135 } 00:07:13.135 ] 00:07:13.135 } 00:07:13.135 ] 00:07:13.135 } 00:07:13.135 [2024-07-12 11:31:16.468184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.135 [2024-07-12 11:31:16.580671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.392 [2024-07-12 11:31:16.634669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.958  Copying: 0/0 [B] (average 0 Bps) 00:07:13.958 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:13.958 11:31:17 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:13.958 [2024-07-12 11:31:17.291056] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:07:13.958 [2024-07-12 11:31:17.291159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64240 ] 00:07:13.958 { 00:07:13.958 "subsystems": [ 00:07:13.958 { 00:07:13.958 "subsystem": "bdev", 00:07:13.958 "config": [ 00:07:13.958 { 00:07:13.958 "params": { 00:07:13.958 "block_size": 512, 00:07:13.958 "num_blocks": 1048576, 00:07:13.958 "name": "malloc0" 00:07:13.958 }, 00:07:13.958 "method": "bdev_malloc_create" 00:07:13.958 }, 00:07:13.958 { 00:07:13.958 "params": { 00:07:13.958 "filename": "/dev/zram1", 00:07:13.958 "name": "uring0" 00:07:13.958 }, 00:07:13.958 "method": "bdev_uring_create" 00:07:13.958 }, 00:07:13.958 { 00:07:13.958 "params": { 00:07:13.958 "name": "uring0" 00:07:13.958 }, 00:07:13.958 "method": "bdev_uring_delete" 00:07:13.958 }, 00:07:13.958 { 00:07:13.958 "method": "bdev_wait_for_examine" 00:07:13.958 } 00:07:13.958 ] 00:07:13.958 } 00:07:13.958 ] 00:07:13.958 } 00:07:14.217 [2024-07-12 11:31:17.433633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.217 [2024-07-12 11:31:17.546401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.217 [2024-07-12 11:31:17.599152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.475 [2024-07-12 11:31:17.797303] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:14.475 [2024-07-12 11:31:17.797360] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:14.475 [2024-07-12 11:31:17.797373] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:14.475 [2024-07-12 11:31:17.797383] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:14.733 [2024-07-12 11:31:18.102752] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:14.991 00:07:14.991 real 0m15.352s 00:07:14.991 ************************************ 00:07:14.991 END TEST dd_uring_copy 00:07:14.991 ************************************ 00:07:14.991 user 0m10.424s 00:07:14.991 sys 0m12.332s 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.991 11:31:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.992 11:31:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:14.992 00:07:14.992 real 0m15.493s 00:07:14.992 user 0m10.473s 00:07:14.992 sys 0m12.422s 00:07:14.992 11:31:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.992 11:31:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:14.992 ************************************ 00:07:14.992 END TEST spdk_dd_uring 00:07:14.992 ************************************ 00:07:15.250 11:31:18 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:15.250 11:31:18 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:15.250 11:31:18 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.250 11:31:18 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.250 11:31:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:15.250 ************************************ 00:07:15.250 START TEST spdk_dd_sparse 00:07:15.250 ************************************ 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:15.250 * Looking for test storage... 00:07:15.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:15.250 1+0 records in 00:07:15.250 1+0 records out 00:07:15.250 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00691266 s, 607 MB/s 00:07:15.250 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:15.250 1+0 records in 00:07:15.250 1+0 records out 00:07:15.251 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00593419 s, 707 MB/s 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:15.251 1+0 records in 00:07:15.251 1+0 records out 00:07:15.251 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00476501 s, 880 MB/s 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:15.251 ************************************ 00:07:15.251 START TEST dd_sparse_file_to_file 00:07:15.251 ************************************ 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:15.251 11:31:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:15.251 [2024-07-12 11:31:18.639678] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:07:15.251 [2024-07-12 11:31:18.640021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64326 ] 00:07:15.251 { 00:07:15.251 "subsystems": [ 00:07:15.251 { 00:07:15.251 "subsystem": "bdev", 00:07:15.251 "config": [ 00:07:15.251 { 00:07:15.251 "params": { 00:07:15.251 "block_size": 4096, 00:07:15.251 "filename": "dd_sparse_aio_disk", 00:07:15.251 "name": "dd_aio" 00:07:15.251 }, 00:07:15.251 "method": "bdev_aio_create" 00:07:15.251 }, 00:07:15.251 { 00:07:15.251 "params": { 00:07:15.251 "lvs_name": "dd_lvstore", 00:07:15.251 "bdev_name": "dd_aio" 00:07:15.251 }, 00:07:15.251 "method": "bdev_lvol_create_lvstore" 00:07:15.251 }, 00:07:15.251 { 00:07:15.251 "method": "bdev_wait_for_examine" 00:07:15.251 } 00:07:15.251 ] 00:07:15.251 } 00:07:15.251 ] 00:07:15.251 } 00:07:15.509 [2024-07-12 11:31:18.777687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.509 [2024-07-12 11:31:18.903023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.767 [2024-07-12 11:31:18.960730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.025  Copying: 12/36 [MB] (average 923 MBps) 00:07:16.025 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:16.025 11:31:19 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:16.025 ************************************ 00:07:16.025 END TEST dd_sparse_file_to_file 00:07:16.025 ************************************ 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:16.025 00:07:16.025 real 0m0.750s 00:07:16.025 user 0m0.476s 00:07:16.025 sys 0m0.359s 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 ************************************ 00:07:16.025 START TEST dd_sparse_file_to_bdev 00:07:16.025 ************************************ 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:16.025 11:31:19 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 [2024-07-12 11:31:19.427906] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:07:16.025 [2024-07-12 11:31:19.428535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64374 ] 00:07:16.025 { 00:07:16.025 "subsystems": [ 00:07:16.025 { 00:07:16.025 "subsystem": "bdev", 00:07:16.025 "config": [ 00:07:16.025 { 00:07:16.025 "params": { 00:07:16.025 "block_size": 4096, 00:07:16.025 "filename": "dd_sparse_aio_disk", 00:07:16.025 "name": "dd_aio" 00:07:16.025 }, 00:07:16.025 "method": "bdev_aio_create" 00:07:16.025 }, 00:07:16.025 { 00:07:16.025 "params": { 00:07:16.025 "lvs_name": "dd_lvstore", 00:07:16.025 "lvol_name": "dd_lvol", 00:07:16.025 "size_in_mib": 36, 00:07:16.025 "thin_provision": true 00:07:16.025 }, 00:07:16.025 "method": "bdev_lvol_create" 00:07:16.025 }, 00:07:16.025 { 00:07:16.025 "method": "bdev_wait_for_examine" 00:07:16.025 } 00:07:16.025 ] 00:07:16.025 } 00:07:16.025 ] 00:07:16.025 } 00:07:16.283 [2024-07-12 11:31:19.564113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.283 [2024-07-12 11:31:19.678260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.541 [2024-07-12 11:31:19.734687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.799  Copying: 12/36 [MB] (average 571 MBps) 00:07:16.799 00:07:16.799 00:07:16.799 real 0m0.697s 00:07:16.799 user 0m0.465s 00:07:16.799 sys 0m0.343s 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:16.799 ************************************ 00:07:16.799 END TEST dd_sparse_file_to_bdev 00:07:16.799 ************************************ 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:16.799 ************************************ 00:07:16.799 START TEST dd_sparse_bdev_to_file 00:07:16.799 ************************************ 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
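The two sparse copies traced above follow one pattern: spdk_dd is handed a JSON bdev config on a file descriptor that layers an AIO bdev (dd_aio) over the sparse backing file, puts an lvstore/lvol on top of it, and then copies with --sparse so holes in the input are skipped. A minimal stand-alone sketch of that flow is below; the bdev names, the 4096-byte block size, the 36 MiB lvol and the 12582912-byte --bs value are copied from the trace, while writing the config to a scratch file (instead of gen_conf piping it through /dev/fd/62), creating the lvstore and the lvol in a single config, and the pre-existing sparse input file_zero2 are assumptions.

# Sketch only: reproduce the file-to-lvol sparse copy outside the autotest harness.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
truncate dd_sparse_aio_disk --size 104857600    # 100 MiB backing file, as in prepare()
cat > sparse_conf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_aio_create",
          "params": { "filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096 } },
        { "method": "bdev_lvol_create_lvstore",
          "params": { "bdev_name": "dd_aio", "lvs_name": "dd_lvstore" } },
        { "method": "bdev_lvol_create",
          "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                      "size_in_mib": 36, "thin_provision": true } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
# file_zero2 is assumed to exist already (in the trace it was produced by the file_to_file copy).
"$SPDK_DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json sparse_conf.json

Because the lvol is thin-provisioned, only the written data ends up allocated on dd_aio, which is what the stat --printf=%b comparisons in these tests assert (24576 512-byte blocks, i.e. the 12 MiB actually copied out of the 36 MiB apparent size).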
00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:16.799 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:16.799 [2024-07-12 11:31:20.177356] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:07:16.799 { 00:07:16.799 "subsystems": [ 00:07:16.799 { 00:07:16.799 "subsystem": "bdev", 00:07:16.799 "config": [ 00:07:16.799 { 00:07:16.799 "params": { 00:07:16.799 "block_size": 4096, 00:07:16.799 "filename": "dd_sparse_aio_disk", 00:07:16.799 "name": "dd_aio" 00:07:16.799 }, 00:07:16.799 "method": "bdev_aio_create" 00:07:16.799 }, 00:07:16.799 { 00:07:16.799 "method": "bdev_wait_for_examine" 00:07:16.799 } 00:07:16.799 ] 00:07:16.799 } 00:07:16.799 ] 00:07:16.799 } 00:07:16.799 [2024-07-12 11:31:20.177994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64412 ] 00:07:17.057 [2024-07-12 11:31:20.318809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.057 [2024-07-12 11:31:20.435296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.057 [2024-07-12 11:31:20.490739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.574  Copying: 12/36 [MB] (average 1000 MBps) 00:07:17.574 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:17.574 ************************************ 00:07:17.574 END TEST dd_sparse_bdev_to_file 00:07:17.574 ************************************ 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:17.574 00:07:17.574 real 0m0.699s 00:07:17.574 user 0m0.443s 00:07:17.574 sys 0m0.345s 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:17.574 ************************************ 00:07:17.574 END TEST spdk_dd_sparse 00:07:17.574 ************************************ 00:07:17.574 00:07:17.574 real 0m2.420s 00:07:17.574 user 0m1.476s 00:07:17.574 sys 0m1.224s 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.574 11:31:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:17.574 11:31:20 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:17.574 11:31:20 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:17.574 11:31:20 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.574 11:31:20 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.574 11:31:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:17.574 ************************************ 00:07:17.574 START TEST spdk_dd_negative 00:07:17.574 ************************************ 00:07:17.574 11:31:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:17.574 * Looking for test storage... 00:07:17.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:17.574 11:31:21 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.574 11:31:21 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.574 11:31:21 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.574 11:31:21 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.574 11:31:21 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.575 11:31:21 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.575 11:31:21 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.575 11:31:21 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:17.575 11:31:21 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.575 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.575 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.575 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:17.575 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.833 ************************************ 00:07:17.833 START TEST dd_invalid_arguments 00:07:17.833 ************************************ 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.833 11:31:21 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.833 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:17.833 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:17.833 00:07:17.833 CPU options: 00:07:17.834 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:17.834 (like [0,1,10]) 00:07:17.834 --lcores lcore to CPU mapping list. The list is in the format: 00:07:17.834 [<,lcores[@CPUs]>...] 00:07:17.834 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:17.834 Within the group, '-' is used for range separator, 00:07:17.834 ',' is used for single number separator. 00:07:17.834 '( )' can be omitted for single element group, 00:07:17.834 '@' can be omitted if cpus and lcores have the same value 00:07:17.834 --disable-cpumask-locks Disable CPU core lock files. 00:07:17.834 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:17.834 pollers in the app support interrupt mode) 00:07:17.834 -p, --main-core main (primary) core for DPDK 00:07:17.834 00:07:17.834 Configuration options: 00:07:17.834 -c, --config, --json JSON config file 00:07:17.834 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:17.834 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:17.834 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:17.834 --rpcs-allowed comma-separated list of permitted RPCS 00:07:17.834 --json-ignore-init-errors don't exit on invalid config entry 00:07:17.834 00:07:17.834 Memory options: 00:07:17.834 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:17.834 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:17.834 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:17.834 -R, --huge-unlink unlink huge files after initialization 00:07:17.834 -n, --mem-channels number of memory channels used for DPDK 00:07:17.834 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:17.834 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:17.834 --no-huge run without using hugepages 00:07:17.834 -i, --shm-id shared memory ID (optional) 00:07:17.834 -g, --single-file-segments force creating just one hugetlbfs file 00:07:17.834 00:07:17.834 PCI options: 00:07:17.834 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:17.834 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:17.834 -u, --no-pci disable PCI access 00:07:17.834 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:17.834 00:07:17.834 Log options: 00:07:17.834 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:17.834 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:17.834 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:17.834 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:17.834 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:17.834 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:17.834 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:17.834 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:17.834 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:17.834 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:17.834 virtio_vfio_user, vmd) 00:07:17.834 --silence-noticelog disable notice level logging to stderr 00:07:17.834 00:07:17.834 Trace options: 00:07:17.834 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:17.834 setting 0 to disable trace (default 32768) 00:07:17.834 Tracepoints vary in size and can use more than one trace entry. 00:07:17.834 -e, --tpoint-group [:] 00:07:17.834 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:17.834 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:17.834 [2024-07-12 11:31:21.095181] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:17.834 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:17.834 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:17.834 a tracepoint group. First tpoint inside a group can be enabled by 00:07:17.834 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:17.834 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:17.834 in /include/spdk_internal/trace_defs.h 00:07:17.834 00:07:17.834 Other options: 00:07:17.834 -h, --help show this usage 00:07:17.834 -v, --version print SPDK version 00:07:17.834 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:17.834 --env-context Opaque context for use of the env implementation 00:07:17.834 00:07:17.834 Application specific: 00:07:17.834 [--------- DD Options ---------] 00:07:17.834 --if Input file. Must specify either --if or --ib. 00:07:17.834 --ib Input bdev. Must specifier either --if or --ib 00:07:17.834 --of Output file. Must specify either --of or --ob. 00:07:17.834 --ob Output bdev. Must specify either --of or --ob. 00:07:17.834 --iflag Input file flags. 00:07:17.834 --oflag Output file flags. 00:07:17.834 --bs I/O unit size (default: 4096) 00:07:17.834 --qd Queue depth (default: 2) 00:07:17.834 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:17.834 --skip Skip this many I/O units at start of input. (default: 0) 00:07:17.834 --seek Skip this many I/O units at start of output. (default: 0) 00:07:17.834 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:17.834 --sparse Enable hole skipping in input target 00:07:17.834 Available iflag and oflag values: 00:07:17.834 append - append mode 00:07:17.834 direct - use direct I/O for data 00:07:17.834 directory - fail unless a directory 00:07:17.834 dsync - use synchronized I/O for data 00:07:17.834 noatime - do not update access time 00:07:17.834 noctty - do not assign controlling terminal from file 00:07:17.834 nofollow - do not follow symlinks 00:07:17.834 nonblock - use non-blocking I/O 00:07:17.834 sync - use synchronized I/O for data and metadata 00:07:17.834 ************************************ 00:07:17.834 END TEST dd_invalid_arguments 00:07:17.834 ************************************ 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.834 00:07:17.834 real 0m0.090s 00:07:17.834 user 0m0.058s 00:07:17.834 sys 0m0.031s 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:17.834 ************************************ 00:07:17.834 START TEST dd_double_input 00:07:17.834 ************************************ 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:17.834 [2024-07-12 11:31:21.225059] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
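The dd_invalid_arguments and dd_double_input cases above both lean on the NOT wrapper from autotest_common.sh: spdk_dd must exit non-zero and print the matching *ERROR* line when it is fed an unknown option or both an input file and an input bdev. A reduced sketch of the same assertion without the harness is below; the spdk_dd path, the dd.dump0 file and the error text are taken from the trace, while capturing the combined output in err.log and checking it with grep are assumptions.

# Sketch only: spdk_dd must refuse --if together with --ib and exit non-zero.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= >err.log 2>&1; then
    echo "expected spdk_dd to fail" >&2
    exit 1
fi
grep -qF 'You may specify either --if or --ib, but not both.' err.log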
00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.834 00:07:17.834 real 0m0.077s 00:07:17.834 user 0m0.046s 00:07:17.834 sys 0m0.027s 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.834 11:31:21 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:17.834 ************************************ 00:07:17.834 END TEST dd_double_input 00:07:17.834 ************************************ 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.093 ************************************ 00:07:18.093 START TEST dd_double_output 00:07:18.093 ************************************ 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:18.093 [2024-07-12 11:31:21.350114] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:18.093 ************************************ 00:07:18.093 END TEST dd_double_output 00:07:18.093 ************************************ 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.093 00:07:18.093 real 0m0.075s 00:07:18.093 user 0m0.044s 00:07:18.093 sys 0m0.030s 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.093 ************************************ 00:07:18.093 START TEST dd_no_input 00:07:18.093 ************************************ 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.093 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.094 11:31:21 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:18.094 [2024-07-12 11:31:21.474476] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.094 ************************************ 00:07:18.094 END TEST dd_no_input 00:07:18.094 ************************************ 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.094 00:07:18.094 real 0m0.073s 00:07:18.094 user 0m0.043s 00:07:18.094 sys 0m0.029s 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.094 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.352 ************************************ 00:07:18.352 START TEST dd_no_output 00:07:18.352 ************************************ 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.352 11:31:21 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.352 [2024-07-12 11:31:21.589564] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:18.352 ************************************ 00:07:18.352 END TEST dd_no_output 00:07:18.352 ************************************ 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.352 00:07:18.352 real 0m0.063s 00:07:18.352 user 0m0.040s 00:07:18.352 sys 0m0.022s 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.352 ************************************ 00:07:18.352 START TEST dd_wrong_blocksize 00:07:18.352 ************************************ 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:18.352 [2024-07-12 11:31:21.703943] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:18.352 ************************************ 00:07:18.352 END TEST dd_wrong_blocksize 00:07:18.352 ************************************ 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.352 00:07:18.352 real 0m0.065s 00:07:18.352 user 0m0.040s 00:07:18.352 sys 0m0.024s 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.352 ************************************ 00:07:18.352 START TEST dd_smaller_blocksize 00:07:18.352 ************************************ 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.352 11:31:21 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:18.610 [2024-07-12 11:31:21.826782] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:07:18.610 [2024-07-12 11:31:21.826905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64625 ] 00:07:18.610 [2024-07-12 11:31:21.967790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.868 [2024-07-12 11:31:22.097095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.868 [2024-07-12 11:31:22.153996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.125 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:19.125 [2024-07-12 11:31:22.474620] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:19.125 [2024-07-12 11:31:22.474705] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.385 [2024-07-12 11:31:22.589495] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.385 ************************************ 00:07:19.385 END TEST dd_smaller_blocksize 00:07:19.385 ************************************ 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.385 00:07:19.385 real 0m0.927s 00:07:19.385 user 0m0.451s 00:07:19.385 sys 0m0.368s 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.385 ************************************ 00:07:19.385 START TEST dd_invalid_count 00:07:19.385 ************************************ 00:07:19.385 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:19.386 [2024-07-12 11:31:22.804257] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.386 00:07:19.386 real 0m0.073s 00:07:19.386 user 0m0.048s 00:07:19.386 sys 0m0.024s 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.386 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:19.386 ************************************ 00:07:19.386 END TEST dd_invalid_count 
00:07:19.386 ************************************ 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.654 ************************************ 00:07:19.654 START TEST dd_invalid_oflag 00:07:19.654 ************************************ 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:19.654 [2024-07-12 11:31:22.937418] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.654 00:07:19.654 real 0m0.083s 00:07:19.654 user 0m0.051s 00:07:19.654 sys 0m0.031s 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:19.654 
************************************ 00:07:19.654 END TEST dd_invalid_oflag 00:07:19.654 ************************************ 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.654 11:31:22 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.654 ************************************ 00:07:19.654 START TEST dd_invalid_iflag 00:07:19.654 ************************************ 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.654 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.655 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.655 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:19.655 [2024-07-12 11:31:23.070191] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:19.655 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:19.655 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.655 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.655 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.655 00:07:19.655 real 0m0.078s 00:07:19.655 user 0m0.039s 00:07:19.655 sys 0m0.037s 00:07:19.655 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.655 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.655 ************************************ 00:07:19.655 END TEST dd_invalid_iflag 00:07:19.655 ************************************ 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.913 ************************************ 00:07:19.913 START TEST dd_unknown_flag 00:07:19.913 ************************************ 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.913 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:19.913 [2024-07-12 11:31:23.192001] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:07:19.913 [2024-07-12 11:31:23.192136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64727 ] 00:07:19.913 [2024-07-12 11:31:23.329328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.171 [2024-07-12 11:31:23.473760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.171 [2024-07-12 11:31:23.529348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.171 [2024-07-12 11:31:23.565965] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:20.171 [2024-07-12 11:31:23.566029] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.171 [2024-07-12 11:31:23.566095] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:20.171 [2024-07-12 11:31:23.566113] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.171 [2024-07-12 11:31:23.566375] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:20.171 [2024-07-12 11:31:23.566396] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.171 [2024-07-12 11:31:23.566454] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:20.171 [2024-07-12 11:31:23.566469] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:20.429 [2024-07-12 11:31:23.682299] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.429 00:07:20.429 real 0m0.648s 00:07:20.429 user 0m0.392s 00:07:20.429 sys 0m0.160s 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.429 ************************************ 00:07:20.429 END TEST dd_unknown_flag 00:07:20.429 ************************************ 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.429 ************************************ 00:07:20.429 START TEST dd_invalid_json 00:07:20.429 ************************************ 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:20.429 11:31:23 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.429 11:31:23 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:20.688 [2024-07-12 11:31:23.881542] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:07:20.688 [2024-07-12 11:31:23.881671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64751 ] 00:07:20.688 [2024-07-12 11:31:24.015549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.688 [2024-07-12 11:31:24.125784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.688 [2024-07-12 11:31:24.125859] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:20.688 [2024-07-12 11:31:24.125878] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:20.688 [2024-07-12 11:31:24.125888] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.688 [2024-07-12 11:31:24.125926] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.946 00:07:20.946 real 0m0.414s 00:07:20.946 user 0m0.238s 00:07:20.946 sys 0m0.074s 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.946 ************************************ 00:07:20.946 END TEST dd_invalid_json 00:07:20.946 ************************************ 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.946 ************************************ 00:07:20.946 END TEST spdk_dd_negative 00:07:20.946 ************************************ 00:07:20.946 00:07:20.946 real 0m3.349s 00:07:20.946 user 0m1.717s 00:07:20.946 sys 0m1.286s 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.946 11:31:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.946 11:31:24 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:20.946 ************************************ 00:07:20.946 END TEST spdk_dd 00:07:20.946 ************************************ 00:07:20.946 00:07:20.946 real 1m19.622s 00:07:20.946 user 0m52.365s 00:07:20.946 sys 0m33.209s 00:07:20.946 11:31:24 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.946 11:31:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:20.946 11:31:24 -- common/autotest_common.sh@1142 -- # return 0 00:07:20.946 11:31:24 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:20.946 11:31:24 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:20.946 11:31:24 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:20.946 11:31:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:20.946 11:31:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.205 11:31:24 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:07:21.205 11:31:24 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:21.205 11:31:24 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:21.205 11:31:24 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:21.205 11:31:24 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:21.205 11:31:24 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:21.205 11:31:24 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.205 11:31:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.205 11:31:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.205 11:31:24 -- common/autotest_common.sh@10 -- # set +x 00:07:21.205 ************************************ 00:07:21.205 START TEST nvmf_tcp 00:07:21.205 ************************************ 00:07:21.205 11:31:24 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.205 * Looking for test storage... 00:07:21.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.205 11:31:24 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.205 11:31:24 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.205 11:31:24 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.205 11:31:24 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.205 11:31:24 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.205 11:31:24 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.206 11:31:24 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.206 11:31:24 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:21.206 11:31:24 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:21.206 11:31:24 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.206 11:31:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:21.206 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:21.206 11:31:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.206 11:31:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.206 11:31:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.206 ************************************ 00:07:21.206 START TEST nvmf_host_management 00:07:21.206 ************************************ 00:07:21.206 
11:31:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:21.206 * Looking for test storage... 00:07:21.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.206 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.464 11:31:24 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:21.465 Cannot find device "nvmf_init_br" 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:21.465 Cannot find device "nvmf_tgt_br" 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:21.465 Cannot find device "nvmf_tgt_br2" 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:21.465 Cannot find device "nvmf_init_br" 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:21.465 Cannot find device "nvmf_tgt_br" 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:21.465 11:31:24 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:21.465 Cannot find device "nvmf_tgt_br2" 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:21.465 Cannot find device "nvmf_br" 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:21.465 Cannot find device "nvmf_init_if" 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:21.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:21.465 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:21.465 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:21.723 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:21.723 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:21.723 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:21.723 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:07:21.723 11:31:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:21.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:07:21.723 00:07:21.723 --- 10.0.0.2 ping statistics --- 00:07:21.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.723 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:21.723 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.723 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:07:21.723 00:07:21.723 --- 10.0.0.3 ping statistics --- 00:07:21.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.723 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:21.723 00:07:21.723 --- 10.0.0.1 ping statistics --- 00:07:21.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.723 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65007 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65007 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65007 ']' 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.723 11:31:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.982 [2024-07-12 11:31:25.181619] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:07:21.982 [2024-07-12 11:31:25.181711] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.982 [2024-07-12 11:31:25.325160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.240 [2024-07-12 11:31:25.443045] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.240 [2024-07-12 11:31:25.443109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.240 [2024-07-12 11:31:25.443121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.240 [2024-07-12 11:31:25.443130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.240 [2024-07-12 11:31:25.443137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:22.240 [2024-07-12 11:31:25.443318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.240 [2024-07-12 11:31:25.443414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.240 [2024-07-12 11:31:25.443679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.240 [2024-07-12 11:31:25.443680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.240 [2024-07-12 11:31:25.497396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.807 [2024-07-12 11:31:26.226311] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:22.807 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.065 Malloc0 00:07:23.065 [2024-07-12 11:31:26.302500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65071 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65071 /var/tmp/bdevperf.sock 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65071 ']' 
00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:23.065 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:23.066 { 00:07:23.066 "params": { 00:07:23.066 "name": "Nvme$subsystem", 00:07:23.066 "trtype": "$TEST_TRANSPORT", 00:07:23.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.066 "adrfam": "ipv4", 00:07:23.066 "trsvcid": "$NVMF_PORT", 00:07:23.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:23.066 "hdgst": ${hdgst:-false}, 00:07:23.066 "ddgst": ${ddgst:-false} 00:07:23.066 }, 00:07:23.066 "method": "bdev_nvme_attach_controller" 00:07:23.066 } 00:07:23.066 EOF 00:07:23.066 )") 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:23.066 11:31:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:23.066 "params": { 00:07:23.066 "name": "Nvme0", 00:07:23.066 "trtype": "tcp", 00:07:23.066 "traddr": "10.0.0.2", 00:07:23.066 "adrfam": "ipv4", 00:07:23.066 "trsvcid": "4420", 00:07:23.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:23.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:23.066 "hdgst": false, 00:07:23.066 "ddgst": false 00:07:23.066 }, 00:07:23.066 "method": "bdev_nvme_attach_controller" 00:07:23.066 }' 00:07:23.066 [2024-07-12 11:31:26.402198] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:07:23.066 [2024-07-12 11:31:26.402300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65071 ] 00:07:23.323 [2024-07-12 11:31:26.546162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.323 [2024-07-12 11:31:26.675966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.323 [2024-07-12 11:31:26.743070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.582 Running I/O for 10 seconds... 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.151 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.151 [2024-07-12 11:31:27.495726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.495995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496042] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the 
state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.151 [2024-07-12 11:31:27.496396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 [2024-07-12 11:31:27.496662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c0950 is same with the state(5) to be set 00:07:24.152 task offset: 114688 on job bdev=Nvme0n1 fails 00:07:24.152 00:07:24.152 
Latency(us) 00:07:24.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:24.152 Job: Nvme0n1 ended in about 0.63 seconds with error 00:07:24.152 Verification LBA range: start 0x0 length 0x400 00:07:24.152 Nvme0n1 : 0.63 1418.56 88.66 101.33 0.00 40865.12 3664.06 42181.35 00:07:24.152 =================================================================================================================== 00:07:24.152 Total : 1418.56 88.66 101.33 0.00 40865.12 3664.06 42181.35 00:07:24.152 [2024-07-12 11:31:27.496773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.496817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.496854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.496871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.496884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.496894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.496905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.496914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.496925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.496935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.496946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.496955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.496966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.496975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.496990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.152 [2024-07-12 11:31:27.497506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.152 [2024-07-12 11:31:27.497516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:24.153 [2024-07-12 11:31:27.497928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.497984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.497993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 
[2024-07-12 11:31:27.498141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.153 [2024-07-12 11:31:27.498262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.498273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61ec0 is same with the state(5) to be set 00:07:24.153 [2024-07-12 11:31:27.498343] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd61ec0 was disconnected and freed. reset controller. 
00:07:24.153 [2024-07-12 11:31:27.499499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:24.153 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.153 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:24.153 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.153 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.153 [2024-07-12 11:31:27.501468] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.153 [2024-07-12 11:31:27.501492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd59d50 (9): Bad file descriptor 00:07:24.153 [2024-07-12 11:31:27.502973] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:24.153 [2024-07-12 11:31:27.503058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:24.153 [2024-07-12 11:31:27.503083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.153 [2024-07-12 11:31:27.503099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:24.153 [2024-07-12 11:31:27.503109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:24.153 [2024-07-12 11:31:27.503118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:24.153 [2024-07-12 11:31:27.503128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd59d50 00:07:24.153 [2024-07-12 11:31:27.503161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd59d50 (9): Bad file descriptor 00:07:24.153 [2024-07-12 11:31:27.503179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:24.153 [2024-07-12 11:31:27.503189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:24.153 [2024-07-12 11:31:27.503200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:24.153 [2024-07-12 11:31:27.503216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
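The cascade above (the SQ DELETION aborts, the rejected FABRIC CONNECT with sct 1/sc 132, and the final "Resetting controller failed") is the intended effect of host_management.sh@84 revoking the host's access while bdevperf still has I/O in flight; @85 then re-adds the host. A reduced sketch of that RPC pair as it could be issued by hand against the same target (rpc.py path as used elsewhere in this log; the target's default RPC socket is assumed):

# Revoke access: the target tears down the live queue pair, and the host's
# automatic reconnect is refused with "does not allow host".
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access so a later initiator run can connect again.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0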
00:07:24.153 11:31:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.154 11:31:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65071 00:07:25.160 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65071) - No such process 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:25.160 { 00:07:25.160 "params": { 00:07:25.160 "name": "Nvme$subsystem", 00:07:25.160 "trtype": "$TEST_TRANSPORT", 00:07:25.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.160 "adrfam": "ipv4", 00:07:25.160 "trsvcid": "$NVMF_PORT", 00:07:25.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.160 "hdgst": ${hdgst:-false}, 00:07:25.160 "ddgst": ${ddgst:-false} 00:07:25.160 }, 00:07:25.160 "method": "bdev_nvme_attach_controller" 00:07:25.160 } 00:07:25.160 EOF 00:07:25.160 )") 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:25.160 11:31:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:25.160 "params": { 00:07:25.160 "name": "Nvme0", 00:07:25.160 "trtype": "tcp", 00:07:25.160 "traddr": "10.0.0.2", 00:07:25.160 "adrfam": "ipv4", 00:07:25.160 "trsvcid": "4420", 00:07:25.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:25.160 "hdgst": false, 00:07:25.160 "ddgst": false 00:07:25.160 }, 00:07:25.160 "method": "bdev_nvme_attach_controller" 00:07:25.160 }' 00:07:25.160 [2024-07-12 11:31:28.561700] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:07:25.160 [2024-07-12 11:31:28.561786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65109 ] 00:07:25.418 [2024-07-12 11:31:28.697760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.418 [2024-07-12 11:31:28.804483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.418 [2024-07-12 11:31:28.865786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.677 Running I/O for 1 seconds... 00:07:26.610 00:07:26.610 Latency(us) 00:07:26.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.610 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:26.610 Verification LBA range: start 0x0 length 0x400 00:07:26.610 Nvme0n1 : 1.01 1521.96 95.12 0.00 0.00 41218.14 4349.21 37891.72 00:07:26.610 =================================================================================================================== 00:07:26.610 Total : 1521.96 95.12 0.00 0.00 41218.14 4349.21 37891.72 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:26.867 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:26.867 rmmod nvme_tcp 00:07:26.867 rmmod nvme_fabrics 00:07:26.867 rmmod nvme_keyring 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65007 ']' 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65007 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65007 ']' 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65007 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65007 00:07:27.124 
11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:27.124 killing process with pid 65007 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65007' 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65007 00:07:27.124 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65007 00:07:27.382 [2024-07-12 11:31:30.586040] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:27.382 00:07:27.382 real 0m6.100s 00:07:27.382 user 0m23.383s 00:07:27.382 sys 0m1.513s 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.382 11:31:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.382 ************************************ 00:07:27.382 END TEST nvmf_host_management 00:07:27.382 ************************************ 00:07:27.382 11:31:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:27.382 11:31:30 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:27.382 11:31:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.382 11:31:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.382 11:31:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.382 ************************************ 00:07:27.382 START TEST nvmf_lvol 00:07:27.382 ************************************ 00:07:27.382 11:31:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:27.382 * Looking for test storage... 
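Before nvmf_lvol.sh reaches its own setup, nvmftestinit and nvmf_veth_init (whose individual ip commands appear further down in this log) rebuild the virtual test network. A reduced sketch of the topology they create, using the interface names and addresses from this run and omitting the link-up, second target interface, and iptables steps:

ip netns add nvmf_tgt_ns_spdk                              # target runs inside its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # NVMF_FIRST_TARGET_IP
ip link add nvmf_br type bridge                            # bridge joining both sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# The ping checks that follow in the log confirm 10.0.0.1, 10.0.0.2 and 10.0.0.3
# are reachable across this bridge before the lvol target is started.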
00:07:27.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:27.382 11:31:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:27.382 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:27.382 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.382 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:27.383 11:31:30 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:27.383 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:27.640 Cannot find device "nvmf_tgt_br" 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:27.640 Cannot find device "nvmf_tgt_br2" 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:27.640 Cannot find device "nvmf_tgt_br" 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:27.640 Cannot find device "nvmf_tgt_br2" 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:27.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:27.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:27.640 11:31:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:27.640 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:27.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:27.947 00:07:27.947 --- 10.0.0.2 ping statistics --- 00:07:27.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.947 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:27.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:27.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:07:27.947 00:07:27.947 --- 10.0.0.3 ping statistics --- 00:07:27.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.947 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:27.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:27.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:07:27.947 00:07:27.947 --- 10.0.0.1 ping statistics --- 00:07:27.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.947 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.947 11:31:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65318 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65318 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65318 ']' 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.948 11:31:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:27.948 [2024-07-12 11:31:31.184466] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:07:27.948 [2024-07-12 11:31:31.184570] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.948 [2024-07-12 11:31:31.317037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.205 [2024-07-12 11:31:31.436042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.205 [2024-07-12 11:31:31.436116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
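For reference, the nvmf_veth_init phase traced above boils down to roughly the following. This is a hand-condensed sketch using the namespace, interface and address names from this log; it is not the full helper in test/nvmf/common.sh, which also performs the cleanup and "Cannot find device" probing seen earlier.

  # Target side lives in its own network namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target ports inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and join the bridge-side veth peers to one bridge.
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Allow NVMe/TCP traffic on port 4420 plus bridge forwarding, then sanity-check connectivity.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1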
00:07:28.205 [2024-07-12 11:31:31.436127] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.205 [2024-07-12 11:31:31.436135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.205 [2024-07-12 11:31:31.436143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.205 [2024-07-12 11:31:31.436614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.205 [2024-07-12 11:31:31.436816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.205 [2024-07-12 11:31:31.436820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.205 [2024-07-12 11:31:31.490602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.768 11:31:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.768 11:31:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:28.768 11:31:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.768 11:31:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.768 11:31:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.768 11:31:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.768 11:31:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.025 [2024-07-12 11:31:32.438647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.025 11:31:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:29.284 11:31:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:29.284 11:31:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:29.847 11:31:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:29.847 11:31:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:29.847 11:31:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:30.411 11:31:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e66883bb-9839-4a7f-b0aa-3035266a5f77 00:07:30.411 11:31:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e66883bb-9839-4a7f-b0aa-3035266a5f77 lvol 20 00:07:30.411 11:31:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c8d96921-2cbf-4c9b-bc0c-403d3432b3cb 00:07:30.411 11:31:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.669 11:31:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8d96921-2cbf-4c9b-bc0c-403d3432b3cb 00:07:31.235 11:31:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.492 [2024-07-12 11:31:34.685302] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.492 11:31:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.750 11:31:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:31.750 11:31:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65400 00:07:31.750 11:31:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:32.681 11:31:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c8d96921-2cbf-4c9b-bc0c-403d3432b3cb MY_SNAPSHOT 00:07:32.939 11:31:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f44365f9-135c-48a8-8f0f-74b849f39144 00:07:32.939 11:31:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c8d96921-2cbf-4c9b-bc0c-403d3432b3cb 30 00:07:33.197 11:31:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f44365f9-135c-48a8-8f0f-74b849f39144 MY_CLONE 00:07:33.454 11:31:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=816729ba-8fac-43d4-93a3-d7ea2871a27c 00:07:33.454 11:31:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 816729ba-8fac-43d4-93a3-d7ea2871a27c 00:07:34.019 11:31:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65400 00:07:42.125 Initializing NVMe Controllers 00:07:42.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.126 Controller IO queue size 128, less than required. 00:07:42.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:42.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:42.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:42.126 Initialization complete. Launching workers. 
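Stripped of xtrace noise, the nvmf_lvol body traced above amounts to the RPC sequence below. This is a condensed restatement of the calls visible in this log (the lvstore, lvol, snapshot and clone UUIDs are the ones this particular run generated), not a replacement for test/nvmf/target/nvmf_lvol.sh.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Build a raid0 from two 64 MiB malloc bdevs and put an lvstore with a 20 MiB lvol on top.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512            # -> Malloc0
  $rpc bdev_malloc_create 64 512            # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs   # -> e66883bb-9839-4a7f-b0aa-3035266a5f77 in this run
  $rpc bdev_lvol_create -u e66883bb-9839-4a7f-b0aa-3035266a5f77 lvol 20

  # Export the lvol over NVMe/TCP and generate load with spdk_nvme_perf for 10 seconds.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c8d96921-2cbf-4c9b-bc0c-403d3432b3cb
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &

  # While I/O is in flight: snapshot, grow the lvol, clone the snapshot, inflate the clone.
  $rpc bdev_lvol_snapshot c8d96921-2cbf-4c9b-bc0c-403d3432b3cb MY_SNAPSHOT
  $rpc bdev_lvol_resize   c8d96921-2cbf-4c9b-bc0c-403d3432b3cb 30
  $rpc bdev_lvol_clone    f44365f9-135c-48a8-8f0f-74b849f39144 MY_CLONE
  $rpc bdev_lvol_inflate  816729ba-8fac-43d4-93a3-d7ea2871a27c
  wait   # let the perf run complete before tearing down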
00:07:42.126 ======================================================== 00:07:42.126 Latency(us) 00:07:42.126 Device Information : IOPS MiB/s Average min max 00:07:42.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10326.40 40.34 12402.53 2274.97 84436.47 00:07:42.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10388.90 40.58 12328.68 377.83 90588.08 00:07:42.126 ======================================================== 00:07:42.126 Total : 20715.30 80.92 12365.49 377.83 90588.08 00:07:42.126 00:07:42.126 11:31:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.126 11:31:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c8d96921-2cbf-4c9b-bc0c-403d3432b3cb 00:07:42.384 11:31:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e66883bb-9839-4a7f-b0aa-3035266a5f77 00:07:42.642 11:31:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:42.642 11:31:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:42.642 11:31:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:42.642 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.642 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.901 rmmod nvme_tcp 00:07:42.901 rmmod nvme_fabrics 00:07:42.901 rmmod nvme_keyring 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65318 ']' 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65318 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65318 ']' 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65318 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65318 00:07:42.901 killing process with pid 65318 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65318' 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65318 00:07:42.901 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65318 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.159 
11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:43.159 00:07:43.159 real 0m15.812s 00:07:43.159 user 1m5.612s 00:07:43.159 sys 0m4.394s 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:43.159 ************************************ 00:07:43.159 END TEST nvmf_lvol 00:07:43.159 ************************************ 00:07:43.159 11:31:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:43.159 11:31:46 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:43.159 11:31:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.159 11:31:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.159 11:31:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.159 ************************************ 00:07:43.159 START TEST nvmf_lvs_grow 00:07:43.159 ************************************ 00:07:43.159 11:31:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:43.418 * Looking for test storage... 
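The nvmf_lvol teardown that closes the test above, again condensed from this log; the PID 65318 and the UUIDs are specific to this run, and the last namespace-removal line is an assumption about what the _remove_spdk_ns helper does, since its body is not traced here.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Undo the target configuration in reverse order of creation.
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete c8d96921-2cbf-4c9b-bc0c-403d3432b3cb
  $rpc bdev_lvol_delete_lvstore -u e66883bb-9839-4a7f-b0aa-3035266a5f77

  # nvmftestfini: unload the kernel initiator modules, stop nvmf_tgt, drop the test network state.
  sync
  modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill 65318 && wait 65318       # the nvmf_tgt started earlier in this run
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if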
00:07:43.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:43.418 11:31:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:43.419 Cannot find device "nvmf_tgt_br" 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:43.419 Cannot find device "nvmf_tgt_br2" 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:43.419 Cannot find device "nvmf_tgt_br" 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:43.419 Cannot find device "nvmf_tgt_br2" 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:43.419 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:43.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:43.419 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:43.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:43.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:07:43.678 00:07:43.678 --- 10.0.0.2 ping statistics --- 00:07:43.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.678 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:43.678 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:43.678 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:07:43.678 00:07:43.678 --- 10.0.0.3 ping statistics --- 00:07:43.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.678 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:43.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:07:43.678 00:07:43.678 --- 10.0.0.1 ping statistics --- 00:07:43.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.678 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.678 11:31:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65722 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65722 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65722 ']' 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
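The nvmfappstart/waitforlisten pair traced above launches the target inside the test namespace and blocks until its RPC socket answers. A minimal stand-in is sketched below; the polling loop is illustrative only, and the real waitforlisten in autotest_common.sh is more careful about timeouts and checking that the process is still alive.

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # Wait until /var/tmp/spdk.sock exists and the app responds to a basic RPC.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      if [ -S /var/tmp/spdk.sock ] && $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done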
00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.678 11:31:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:43.678 [2024-07-12 11:31:47.083872] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:07:43.678 [2024-07-12 11:31:47.084001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.937 [2024-07-12 11:31:47.227602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.937 [2024-07-12 11:31:47.365495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.937 [2024-07-12 11:31:47.365565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.937 [2024-07-12 11:31:47.365603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.937 [2024-07-12 11:31:47.365615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.937 [2024-07-12 11:31:47.365623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.937 [2024-07-12 11:31:47.365654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.196 [2024-07-12 11:31:47.424427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.762 11:31:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.762 11:31:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:44.762 11:31:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.762 11:31:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.762 11:31:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.762 11:31:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.762 11:31:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:45.021 [2024-07-12 11:31:48.350666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.021 ************************************ 00:07:45.021 START TEST lvs_grow_clean 00:07:45.021 ************************************ 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:45.021 11:31:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.021 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.366 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:45.366 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:45.625 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3799fa97-8e17-41df-a16d-91d01d733f63 00:07:45.625 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:07:45.625 11:31:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:45.883 11:31:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:45.883 11:31:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:45.883 11:31:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3799fa97-8e17-41df-a16d-91d01d733f63 lvol 150 00:07:46.141 11:31:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648 00:07:46.141 11:31:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:46.141 11:31:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:46.398 [2024-07-12 11:31:49.737528] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:46.398 [2024-07-12 11:31:49.737654] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:46.398 true 00:07:46.398 11:31:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:07:46.399 11:31:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:46.656 11:31:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:46.656 11:31:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:46.914 11:31:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648 00:07:47.481 11:31:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:47.481 [2024-07-12 11:31:50.838153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.481 11:31:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65810 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65810 /var/tmp/bdevperf.sock 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65810 ']' 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.740 11:31:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:48.000 [2024-07-12 11:31:51.191142] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
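The lvs_grow_clean setup traced above, condensed. With a 200 MiB backing file and a 4 MiB cluster size the new lvstore reports 49 data clusters (the remaining space is taken by lvstore metadata), and simply doubling the file and rescanning the AIO bdev leaves that count unchanged; it only jumps to 99 once bdev_lvol_grow_lvstore is issued later in this log. UUIDs below are the ones from this run.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  # 200 MiB file-backed AIO bdev with 4 KiB blocks, carrying an lvstore with 4 MiB clusters.
  rm -f "$aio" && truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  $rpc bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 \
      | jq -r '.[0].total_data_clusters'    # 49

  # A 150 MiB lvol fits in the current store; then the backing file is doubled and rescanned.
  $rpc bdev_lvol_create -u 3799fa97-8e17-41df-a16d-91d01d733f63 lvol 150
  truncate -s 400M "$aio"
  $rpc bdev_aio_rescan aio_bdev
  $rpc bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 \
      | jq -r '.[0].total_data_clusters'    # still 49 until bdev_lvol_grow_lvstore runs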
00:07:48.000 [2024-07-12 11:31:51.191253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65810 ] 00:07:48.000 [2024-07-12 11:31:51.330408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.257 [2024-07-12 11:31:51.459962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.257 [2024-07-12 11:31:51.518266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.824 11:31:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.824 11:31:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:48.824 11:31:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.082 Nvme0n1 00:07:49.082 11:31:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:49.364 [ 00:07:49.364 { 00:07:49.364 "name": "Nvme0n1", 00:07:49.364 "aliases": [ 00:07:49.364 "f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648" 00:07:49.364 ], 00:07:49.364 "product_name": "NVMe disk", 00:07:49.364 "block_size": 4096, 00:07:49.364 "num_blocks": 38912, 00:07:49.364 "uuid": "f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648", 00:07:49.364 "assigned_rate_limits": { 00:07:49.364 "rw_ios_per_sec": 0, 00:07:49.364 "rw_mbytes_per_sec": 0, 00:07:49.364 "r_mbytes_per_sec": 0, 00:07:49.364 "w_mbytes_per_sec": 0 00:07:49.364 }, 00:07:49.364 "claimed": false, 00:07:49.364 "zoned": false, 00:07:49.364 "supported_io_types": { 00:07:49.364 "read": true, 00:07:49.364 "write": true, 00:07:49.364 "unmap": true, 00:07:49.364 "flush": true, 00:07:49.364 "reset": true, 00:07:49.364 "nvme_admin": true, 00:07:49.364 "nvme_io": true, 00:07:49.364 "nvme_io_md": false, 00:07:49.364 "write_zeroes": true, 00:07:49.364 "zcopy": false, 00:07:49.364 "get_zone_info": false, 00:07:49.364 "zone_management": false, 00:07:49.364 "zone_append": false, 00:07:49.364 "compare": true, 00:07:49.364 "compare_and_write": true, 00:07:49.364 "abort": true, 00:07:49.364 "seek_hole": false, 00:07:49.364 "seek_data": false, 00:07:49.364 "copy": true, 00:07:49.364 "nvme_iov_md": false 00:07:49.364 }, 00:07:49.364 "memory_domains": [ 00:07:49.364 { 00:07:49.364 "dma_device_id": "system", 00:07:49.364 "dma_device_type": 1 00:07:49.364 } 00:07:49.364 ], 00:07:49.364 "driver_specific": { 00:07:49.364 "nvme": [ 00:07:49.364 { 00:07:49.364 "trid": { 00:07:49.364 "trtype": "TCP", 00:07:49.364 "adrfam": "IPv4", 00:07:49.364 "traddr": "10.0.0.2", 00:07:49.364 "trsvcid": "4420", 00:07:49.365 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:49.365 }, 00:07:49.365 "ctrlr_data": { 00:07:49.365 "cntlid": 1, 00:07:49.365 "vendor_id": "0x8086", 00:07:49.365 "model_number": "SPDK bdev Controller", 00:07:49.365 "serial_number": "SPDK0", 00:07:49.365 "firmware_revision": "24.09", 00:07:49.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.365 "oacs": { 00:07:49.365 "security": 0, 00:07:49.365 "format": 0, 00:07:49.365 "firmware": 0, 00:07:49.365 "ns_manage": 0 00:07:49.365 }, 00:07:49.365 "multi_ctrlr": true, 00:07:49.365 
"ana_reporting": false 00:07:49.365 }, 00:07:49.365 "vs": { 00:07:49.365 "nvme_version": "1.3" 00:07:49.365 }, 00:07:49.365 "ns_data": { 00:07:49.365 "id": 1, 00:07:49.365 "can_share": true 00:07:49.365 } 00:07:49.365 } 00:07:49.365 ], 00:07:49.365 "mp_policy": "active_passive" 00:07:49.365 } 00:07:49.365 } 00:07:49.365 ] 00:07:49.365 11:31:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65834 00:07:49.365 11:31:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:49.365 11:31:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:49.623 Running I/O for 10 seconds... 00:07:50.562 Latency(us) 00:07:50.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.562 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:50.562 =================================================================================================================== 00:07:50.562 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:50.562 00:07:51.494 11:31:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:07:51.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.494 Nvme0n1 : 2.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:07:51.494 =================================================================================================================== 00:07:51.494 Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:07:51.494 00:07:51.750 true 00:07:51.750 11:31:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:07:51.750 11:31:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:52.006 11:31:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.006 11:31:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.006 11:31:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65834 00:07:52.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.574 Nvme0n1 : 3.00 7175.00 28.03 0.00 0.00 0.00 0.00 0.00 00:07:52.574 =================================================================================================================== 00:07:52.574 Total : 7175.00 28.03 0.00 0.00 0.00 0.00 0.00 00:07:52.574 00:07:53.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.518 Nvme0n1 : 4.00 7191.00 28.09 0.00 0.00 0.00 0.00 0.00 00:07:53.518 =================================================================================================================== 00:07:53.518 Total : 7191.00 28.09 0.00 0.00 0.00 0.00 0.00 00:07:53.518 00:07:54.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.454 Nvme0n1 : 5.00 7175.20 28.03 0.00 0.00 0.00 0.00 0.00 00:07:54.454 =================================================================================================================== 00:07:54.454 Total : 7175.20 28.03 0.00 0.00 0.00 
0.00 0.00 00:07:54.454 00:07:55.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.388 Nvme0n1 : 6.00 7143.50 27.90 0.00 0.00 0.00 0.00 0.00 00:07:55.388 =================================================================================================================== 00:07:55.388 Total : 7143.50 27.90 0.00 0.00 0.00 0.00 0.00 00:07:55.388 00:07:56.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.796 Nvme0n1 : 7.00 7175.29 28.03 0.00 0.00 0.00 0.00 0.00 00:07:56.796 =================================================================================================================== 00:07:56.796 Total : 7175.29 28.03 0.00 0.00 0.00 0.00 0.00 00:07:56.796 00:07:57.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.731 Nvme0n1 : 8.00 7167.38 28.00 0.00 0.00 0.00 0.00 0.00 00:07:57.731 =================================================================================================================== 00:07:57.731 Total : 7167.38 28.00 0.00 0.00 0.00 0.00 0.00 00:07:57.731 00:07:58.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.664 Nvme0n1 : 9.00 7147.11 27.92 0.00 0.00 0.00 0.00 0.00 00:07:58.664 =================================================================================================================== 00:07:58.664 Total : 7147.11 27.92 0.00 0.00 0.00 0.00 0.00 00:07:58.664 00:07:59.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.599 Nvme0n1 : 10.00 7143.60 27.90 0.00 0.00 0.00 0.00 0.00 00:07:59.599 =================================================================================================================== 00:07:59.599 Total : 7143.60 27.90 0.00 0.00 0.00 0.00 0.00 00:07:59.599 00:07:59.599 00:07:59.599 Latency(us) 00:07:59.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.599 Nvme0n1 : 10.01 7151.67 27.94 0.00 0.00 17892.05 10545.34 56718.43 00:07:59.599 =================================================================================================================== 00:07:59.599 Total : 7151.67 27.94 0.00 0.00 17892.05 10545.34 56718.43 00:07:59.599 0 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65810 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65810 ']' 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65810 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65810 00:07:59.599 killing process with pid 65810 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65810' 00:07:59.599 Received shutdown signal, test time was about 10.000000 seconds 00:07:59.599 00:07:59.599 
Latency(us) 00:07:59.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.599 =================================================================================================================== 00:07:59.599 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65810 00:07:59.599 11:32:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65810 00:07:59.856 11:32:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.114 11:32:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.371 11:32:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:08:00.371 11:32:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:00.630 11:32:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:00.630 11:32:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:00.630 11:32:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:00.888 [2024-07-12 11:32:04.183225] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.888 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:08:01.147 request: 00:08:01.147 { 00:08:01.147 "uuid": "3799fa97-8e17-41df-a16d-91d01d733f63", 00:08:01.147 "method": "bdev_lvol_get_lvstores", 00:08:01.147 "req_id": 1 00:08:01.147 } 00:08:01.147 Got JSON-RPC error response 00:08:01.147 response: 00:08:01.147 { 00:08:01.147 "code": -19, 00:08:01.147 "message": "No such device" 00:08:01.147 } 00:08:01.147 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:01.147 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:01.147 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:01.147 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:01.147 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.406 aio_bdev 00:08:01.406 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648 00:08:01.406 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648 00:08:01.406 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:01.406 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:01.406 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:01.406 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:01.406 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.664 11:32:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648 -t 2000 00:08:01.922 [ 00:08:01.922 { 00:08:01.922 "name": "f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648", 00:08:01.922 "aliases": [ 00:08:01.922 "lvs/lvol" 00:08:01.922 ], 00:08:01.922 "product_name": "Logical Volume", 00:08:01.922 "block_size": 4096, 00:08:01.922 "num_blocks": 38912, 00:08:01.922 "uuid": "f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648", 00:08:01.922 "assigned_rate_limits": { 00:08:01.922 "rw_ios_per_sec": 0, 00:08:01.922 "rw_mbytes_per_sec": 0, 00:08:01.922 "r_mbytes_per_sec": 0, 00:08:01.922 "w_mbytes_per_sec": 0 00:08:01.922 }, 00:08:01.922 "claimed": false, 00:08:01.922 "zoned": false, 00:08:01.922 "supported_io_types": { 00:08:01.922 "read": true, 00:08:01.922 "write": true, 00:08:01.922 "unmap": true, 00:08:01.922 "flush": false, 00:08:01.922 "reset": true, 00:08:01.922 "nvme_admin": false, 00:08:01.922 "nvme_io": false, 00:08:01.922 "nvme_io_md": false, 00:08:01.922 "write_zeroes": true, 00:08:01.922 "zcopy": false, 00:08:01.922 "get_zone_info": false, 00:08:01.922 "zone_management": false, 00:08:01.922 "zone_append": false, 00:08:01.922 "compare": false, 00:08:01.922 "compare_and_write": false, 00:08:01.922 "abort": false, 00:08:01.922 "seek_hole": true, 00:08:01.923 "seek_data": true, 00:08:01.923 "copy": false, 00:08:01.923 "nvme_iov_md": false 00:08:01.923 }, 00:08:01.923 
"driver_specific": { 00:08:01.923 "lvol": { 00:08:01.923 "lvol_store_uuid": "3799fa97-8e17-41df-a16d-91d01d733f63", 00:08:01.923 "base_bdev": "aio_bdev", 00:08:01.923 "thin_provision": false, 00:08:01.923 "num_allocated_clusters": 38, 00:08:01.923 "snapshot": false, 00:08:01.923 "clone": false, 00:08:01.923 "esnap_clone": false 00:08:01.923 } 00:08:01.923 } 00:08:01.923 } 00:08:01.923 ] 00:08:01.923 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:01.923 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:08:01.923 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:02.181 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:02.181 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:08:02.181 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:02.440 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:02.440 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f8b1f3ba-1a3e-4ab0-9a65-3a7620c7b648 00:08:02.715 11:32:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3799fa97-8e17-41df-a16d-91d01d733f63 00:08:02.974 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.974 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:03.541 00:08:03.542 real 0m18.394s 00:08:03.542 user 0m17.215s 00:08:03.542 sys 0m2.622s 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:03.542 ************************************ 00:08:03.542 END TEST lvs_grow_clean 00:08:03.542 ************************************ 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:03.542 ************************************ 00:08:03.542 START TEST lvs_grow_dirty 00:08:03.542 ************************************ 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:03.542 11:32:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.800 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:03.800 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:04.058 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:04.058 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:04.058 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:04.358 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:04.358 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:04.358 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e566f710-9aad-4369-94fa-9f9ffbc121ad lvol 150 00:08:04.616 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=84cabfb4-4073-494d-8a38-2f7f0163e290 00:08:04.616 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.616 11:32:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:04.875 [2024-07-12 11:32:08.158441] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:04.875 [2024-07-12 11:32:08.158552] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:04.875 true 00:08:04.875 11:32:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:04.875 11:32:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:05.133 11:32:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:05.133 11:32:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:05.392 11:32:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 84cabfb4-4073-494d-8a38-2f7f0163e290 00:08:05.651 11:32:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.909 [2024-07-12 11:32:09.114974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.910 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66081 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66081 /var/tmp/bdevperf.sock 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66081 ']' 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.169 11:32:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.169 [2024-07-12 11:32:09.404104] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
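The xtrace above sets up the dirty grow test one RPC at a time; condensed into a plain shell sketch (sizes and NQN taken from this run, paths abbreviated relative to /home/vagrant/spdk_repo/spdk, UUIDs replaced with placeholders, so treat it as an illustration rather than the test script itself), the sequence is roughly:

  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M test/nvmf/target/aio_bdev      # grow only the backing file
  scripts/rpc.py bdev_aio_rescan aio_bdev         # aio bdev picks up 51200 -> 102400 blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The lvstore itself is only grown later, while I/O is running, via bdev_lvol_grow_lvstore -u <lvs-uuid>; that call is what moves total_data_clusters from 49 to 99 further down in the log.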
00:08:06.169 [2024-07-12 11:32:09.404183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66081 ] 00:08:06.169 [2024-07-12 11:32:09.542821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.427 [2024-07-12 11:32:09.671089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.427 [2024-07-12 11:32:09.729719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.994 11:32:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.994 11:32:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:06.994 11:32:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:07.250 Nvme0n1 00:08:07.250 11:32:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:07.507 [ 00:08:07.507 { 00:08:07.507 "name": "Nvme0n1", 00:08:07.507 "aliases": [ 00:08:07.507 "84cabfb4-4073-494d-8a38-2f7f0163e290" 00:08:07.507 ], 00:08:07.507 "product_name": "NVMe disk", 00:08:07.507 "block_size": 4096, 00:08:07.507 "num_blocks": 38912, 00:08:07.507 "uuid": "84cabfb4-4073-494d-8a38-2f7f0163e290", 00:08:07.507 "assigned_rate_limits": { 00:08:07.507 "rw_ios_per_sec": 0, 00:08:07.507 "rw_mbytes_per_sec": 0, 00:08:07.507 "r_mbytes_per_sec": 0, 00:08:07.507 "w_mbytes_per_sec": 0 00:08:07.507 }, 00:08:07.507 "claimed": false, 00:08:07.507 "zoned": false, 00:08:07.507 "supported_io_types": { 00:08:07.507 "read": true, 00:08:07.507 "write": true, 00:08:07.507 "unmap": true, 00:08:07.507 "flush": true, 00:08:07.507 "reset": true, 00:08:07.507 "nvme_admin": true, 00:08:07.507 "nvme_io": true, 00:08:07.507 "nvme_io_md": false, 00:08:07.507 "write_zeroes": true, 00:08:07.507 "zcopy": false, 00:08:07.507 "get_zone_info": false, 00:08:07.507 "zone_management": false, 00:08:07.507 "zone_append": false, 00:08:07.507 "compare": true, 00:08:07.507 "compare_and_write": true, 00:08:07.507 "abort": true, 00:08:07.507 "seek_hole": false, 00:08:07.507 "seek_data": false, 00:08:07.507 "copy": true, 00:08:07.508 "nvme_iov_md": false 00:08:07.508 }, 00:08:07.508 "memory_domains": [ 00:08:07.508 { 00:08:07.508 "dma_device_id": "system", 00:08:07.508 "dma_device_type": 1 00:08:07.508 } 00:08:07.508 ], 00:08:07.508 "driver_specific": { 00:08:07.508 "nvme": [ 00:08:07.508 { 00:08:07.508 "trid": { 00:08:07.508 "trtype": "TCP", 00:08:07.508 "adrfam": "IPv4", 00:08:07.508 "traddr": "10.0.0.2", 00:08:07.508 "trsvcid": "4420", 00:08:07.508 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:07.508 }, 00:08:07.508 "ctrlr_data": { 00:08:07.508 "cntlid": 1, 00:08:07.508 "vendor_id": "0x8086", 00:08:07.508 "model_number": "SPDK bdev Controller", 00:08:07.508 "serial_number": "SPDK0", 00:08:07.508 "firmware_revision": "24.09", 00:08:07.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:07.508 "oacs": { 00:08:07.508 "security": 0, 00:08:07.508 "format": 0, 00:08:07.508 "firmware": 0, 00:08:07.508 "ns_manage": 0 00:08:07.508 }, 00:08:07.508 "multi_ctrlr": true, 00:08:07.508 
"ana_reporting": false 00:08:07.508 }, 00:08:07.508 "vs": { 00:08:07.508 "nvme_version": "1.3" 00:08:07.508 }, 00:08:07.508 "ns_data": { 00:08:07.508 "id": 1, 00:08:07.508 "can_share": true 00:08:07.508 } 00:08:07.508 } 00:08:07.508 ], 00:08:07.508 "mp_policy": "active_passive" 00:08:07.508 } 00:08:07.508 } 00:08:07.508 ] 00:08:07.508 11:32:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66100 00:08:07.508 11:32:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:07.508 11:32:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:07.765 Running I/O for 10 seconds... 00:08:08.736 Latency(us) 00:08:08.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.736 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:08.736 =================================================================================================================== 00:08:08.736 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:08.736 00:08:09.670 11:32:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:09.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.670 Nvme0n1 : 2.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:09.670 =================================================================================================================== 00:08:09.670 Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:09.670 00:08:09.929 true 00:08:09.929 11:32:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:09.929 11:32:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:10.187 11:32:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:10.187 11:32:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:10.187 11:32:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66100 00:08:10.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.752 Nvme0n1 : 3.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:10.752 =================================================================================================================== 00:08:10.752 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:10.752 00:08:11.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.686 Nvme0n1 : 4.00 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:08:11.686 =================================================================================================================== 00:08:11.686 Total : 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:08:11.686 00:08:12.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.618 Nvme0n1 : 5.00 7289.80 28.48 0.00 0.00 0.00 0.00 0.00 00:08:12.618 =================================================================================================================== 00:08:12.618 Total : 7289.80 28.48 0.00 0.00 0.00 
0.00 0.00 00:08:12.618 00:08:13.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.992 Nvme0n1 : 6.00 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:08:13.992 =================================================================================================================== 00:08:13.992 Total : 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:08:13.992 00:08:14.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.927 Nvme0n1 : 7.00 7229.57 28.24 0.00 0.00 0.00 0.00 0.00 00:08:14.927 =================================================================================================================== 00:08:14.927 Total : 7229.57 28.24 0.00 0.00 0.00 0.00 0.00 00:08:14.927 00:08:15.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.861 Nvme0n1 : 8.00 7199.00 28.12 0.00 0.00 0.00 0.00 0.00 00:08:15.861 =================================================================================================================== 00:08:15.861 Total : 7199.00 28.12 0.00 0.00 0.00 0.00 0.00 00:08:15.861 00:08:16.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.794 Nvme0n1 : 9.00 7175.22 28.03 0.00 0.00 0.00 0.00 0.00 00:08:16.794 =================================================================================================================== 00:08:16.794 Total : 7175.22 28.03 0.00 0.00 0.00 0.00 0.00 00:08:16.794 00:08:17.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.729 Nvme0n1 : 10.00 7168.90 28.00 0.00 0.00 0.00 0.00 0.00 00:08:17.729 =================================================================================================================== 00:08:17.729 Total : 7168.90 28.00 0.00 0.00 0.00 0.00 0.00 00:08:17.729 00:08:17.729 00:08:17.729 Latency(us) 00:08:17.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.729 Nvme0n1 : 10.02 7166.37 27.99 0.00 0.00 17856.24 10545.34 40274.85 00:08:17.729 =================================================================================================================== 00:08:17.729 Total : 7166.37 27.99 0.00 0.00 17856.24 10545.34 40274.85 00:08:17.729 0 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66081 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66081 ']' 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66081 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66081 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:17.729 killing process with pid 66081 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66081' 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66081 
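For reference, the table above is produced by a three-step bdevperf flow that is all visible in the trace: bdevperf is started in wait mode (-z) on its own RPC socket, the exported namespace is attached as Nvme0, and the run is triggered with perform_tests. A sketch with the same arguments as this run (paths abbreviated relative to the spdk repo, backgrounding of bdevperf assumed to be handled by the harness):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -S 1 option is what prints the one-second snapshots above, and because bdev_lvol_grow_lvstore is issued while this workload is in flight, the table doubles as a check that I/O keeps flowing while the store grows.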
00:08:17.729 Received shutdown signal, test time was about 10.000000 seconds 00:08:17.729 00:08:17.729 Latency(us) 00:08:17.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.729 =================================================================================================================== 00:08:17.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:17.729 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66081 00:08:17.988 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.246 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:18.504 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:18.504 11:32:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65722 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65722 00:08:18.766 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65722 Killed "${NVMF_APP[@]}" "$@" 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66235 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66235 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66235 ']' 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.766 11:32:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:18.766 [2024-07-12 11:32:22.128287] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:18.766 [2024-07-12 11:32:22.128399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.024 [2024-07-12 11:32:22.273187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.024 [2024-07-12 11:32:22.386111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.024 [2024-07-12 11:32:22.386159] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.024 [2024-07-12 11:32:22.386177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.024 [2024-07-12 11:32:22.386185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.024 [2024-07-12 11:32:22.386192] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.024 [2024-07-12 11:32:22.386222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.024 [2024-07-12 11:32:22.441205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:19.960 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.960 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:19.960 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.960 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:19.960 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:19.960 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.960 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:20.218 [2024-07-12 11:32:23.428705] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:20.218 [2024-07-12 11:32:23.428985] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:20.218 [2024-07-12 11:32:23.429167] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:20.218 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:20.218 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 84cabfb4-4073-494d-8a38-2f7f0163e290 00:08:20.218 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=84cabfb4-4073-494d-8a38-2f7f0163e290 00:08:20.218 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:20.218 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
00:08:20.218 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:20.218 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:20.218 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:20.476 11:32:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84cabfb4-4073-494d-8a38-2f7f0163e290 -t 2000 00:08:20.735 [ 00:08:20.735 { 00:08:20.735 "name": "84cabfb4-4073-494d-8a38-2f7f0163e290", 00:08:20.735 "aliases": [ 00:08:20.735 "lvs/lvol" 00:08:20.735 ], 00:08:20.735 "product_name": "Logical Volume", 00:08:20.735 "block_size": 4096, 00:08:20.735 "num_blocks": 38912, 00:08:20.735 "uuid": "84cabfb4-4073-494d-8a38-2f7f0163e290", 00:08:20.735 "assigned_rate_limits": { 00:08:20.735 "rw_ios_per_sec": 0, 00:08:20.735 "rw_mbytes_per_sec": 0, 00:08:20.735 "r_mbytes_per_sec": 0, 00:08:20.735 "w_mbytes_per_sec": 0 00:08:20.735 }, 00:08:20.735 "claimed": false, 00:08:20.735 "zoned": false, 00:08:20.735 "supported_io_types": { 00:08:20.735 "read": true, 00:08:20.735 "write": true, 00:08:20.735 "unmap": true, 00:08:20.735 "flush": false, 00:08:20.735 "reset": true, 00:08:20.735 "nvme_admin": false, 00:08:20.735 "nvme_io": false, 00:08:20.735 "nvme_io_md": false, 00:08:20.735 "write_zeroes": true, 00:08:20.735 "zcopy": false, 00:08:20.735 "get_zone_info": false, 00:08:20.735 "zone_management": false, 00:08:20.735 "zone_append": false, 00:08:20.735 "compare": false, 00:08:20.735 "compare_and_write": false, 00:08:20.735 "abort": false, 00:08:20.735 "seek_hole": true, 00:08:20.735 "seek_data": true, 00:08:20.735 "copy": false, 00:08:20.735 "nvme_iov_md": false 00:08:20.735 }, 00:08:20.735 "driver_specific": { 00:08:20.735 "lvol": { 00:08:20.735 "lvol_store_uuid": "e566f710-9aad-4369-94fa-9f9ffbc121ad", 00:08:20.735 "base_bdev": "aio_bdev", 00:08:20.735 "thin_provision": false, 00:08:20.735 "num_allocated_clusters": 38, 00:08:20.735 "snapshot": false, 00:08:20.735 "clone": false, 00:08:20.735 "esnap_clone": false 00:08:20.735 } 00:08:20.735 } 00:08:20.735 } 00:08:20.735 ] 00:08:20.735 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:20.735 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:20.735 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:20.994 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:20.994 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:20.994 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:21.253 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:21.253 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:21.511 [2024-07-12 11:32:24.838010] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:21.511 11:32:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:21.769 request: 00:08:21.769 { 00:08:21.769 "uuid": "e566f710-9aad-4369-94fa-9f9ffbc121ad", 00:08:21.769 "method": "bdev_lvol_get_lvstores", 00:08:21.769 "req_id": 1 00:08:21.769 } 00:08:21.769 Got JSON-RPC error response 00:08:21.769 response: 00:08:21.769 { 00:08:21.769 "code": -19, 00:08:21.769 "message": "No such device" 00:08:21.769 } 00:08:21.769 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:21.769 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:21.769 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:21.769 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:21.769 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.027 aio_bdev 00:08:22.027 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 84cabfb4-4073-494d-8a38-2f7f0163e290 00:08:22.027 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=84cabfb4-4073-494d-8a38-2f7f0163e290 00:08:22.027 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:22.027 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:22.027 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:22.027 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:22.027 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.284 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84cabfb4-4073-494d-8a38-2f7f0163e290 -t 2000 00:08:22.542 [ 00:08:22.542 { 00:08:22.542 "name": "84cabfb4-4073-494d-8a38-2f7f0163e290", 00:08:22.542 "aliases": [ 00:08:22.542 "lvs/lvol" 00:08:22.542 ], 00:08:22.542 "product_name": "Logical Volume", 00:08:22.542 "block_size": 4096, 00:08:22.542 "num_blocks": 38912, 00:08:22.542 "uuid": "84cabfb4-4073-494d-8a38-2f7f0163e290", 00:08:22.542 "assigned_rate_limits": { 00:08:22.542 "rw_ios_per_sec": 0, 00:08:22.542 "rw_mbytes_per_sec": 0, 00:08:22.542 "r_mbytes_per_sec": 0, 00:08:22.542 "w_mbytes_per_sec": 0 00:08:22.542 }, 00:08:22.542 "claimed": false, 00:08:22.542 "zoned": false, 00:08:22.542 "supported_io_types": { 00:08:22.542 "read": true, 00:08:22.542 "write": true, 00:08:22.542 "unmap": true, 00:08:22.542 "flush": false, 00:08:22.542 "reset": true, 00:08:22.542 "nvme_admin": false, 00:08:22.542 "nvme_io": false, 00:08:22.542 "nvme_io_md": false, 00:08:22.542 "write_zeroes": true, 00:08:22.542 "zcopy": false, 00:08:22.542 "get_zone_info": false, 00:08:22.542 "zone_management": false, 00:08:22.542 "zone_append": false, 00:08:22.542 "compare": false, 00:08:22.542 "compare_and_write": false, 00:08:22.542 "abort": false, 00:08:22.542 "seek_hole": true, 00:08:22.542 "seek_data": true, 00:08:22.542 "copy": false, 00:08:22.542 "nvme_iov_md": false 00:08:22.542 }, 00:08:22.542 "driver_specific": { 00:08:22.542 "lvol": { 00:08:22.542 "lvol_store_uuid": "e566f710-9aad-4369-94fa-9f9ffbc121ad", 00:08:22.542 "base_bdev": "aio_bdev", 00:08:22.542 "thin_provision": false, 00:08:22.542 "num_allocated_clusters": 38, 00:08:22.542 "snapshot": false, 00:08:22.542 "clone": false, 00:08:22.542 "esnap_clone": false 00:08:22.542 } 00:08:22.542 } 00:08:22.542 } 00:08:22.542 ] 00:08:22.542 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:22.542 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:22.542 11:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:22.800 11:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:22.800 11:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:22.800 11:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:23.058 11:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:23.058 11:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 84cabfb4-4073-494d-8a38-2f7f0163e290 00:08:23.317 11:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u e566f710-9aad-4369-94fa-9f9ffbc121ad 00:08:23.576 11:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.834 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.092 00:08:24.092 real 0m20.543s 00:08:24.092 user 0m43.195s 00:08:24.092 sys 0m8.170s 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:24.092 ************************************ 00:08:24.092 END TEST lvs_grow_dirty 00:08:24.092 ************************************ 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:24.092 nvmf_trace.0 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.092 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.352 rmmod nvme_tcp 00:08:24.352 rmmod nvme_fabrics 00:08:24.352 rmmod nvme_keyring 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66235 ']' 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66235 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66235 ']' 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66235 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66235 00:08:24.352 killing process with pid 66235 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66235' 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66235 00:08:24.352 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66235 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:24.611 00:08:24.611 real 0m41.436s 00:08:24.611 user 1m6.887s 00:08:24.611 sys 0m11.549s 00:08:24.611 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.611 ************************************ 00:08:24.611 11:32:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.611 END TEST nvmf_lvs_grow 00:08:24.611 ************************************ 00:08:24.611 11:32:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:24.611 11:32:28 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:24.611 11:32:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:24.611 11:32:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.611 11:32:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.611 ************************************ 00:08:24.611 START TEST nvmf_bdev_io_wait 00:08:24.611 ************************************ 00:08:24.611 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:24.871 * Looking for test storage... 
00:08:24.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:24.871 Cannot find device "nvmf_tgt_br" 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:24.871 Cannot find device "nvmf_tgt_br2" 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:24.871 Cannot find device "nvmf_tgt_br" 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:24.871 Cannot find device "nvmf_tgt_br2" 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:24.871 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:24.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:24.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:24.872 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:25.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:25.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:08:25.131 00:08:25.131 --- 10.0.0.2 ping statistics --- 00:08:25.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.131 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:25.131 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:25.131 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:25.131 00:08:25.131 --- 10.0.0.3 ping statistics --- 00:08:25.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.131 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:25.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:25.131 00:08:25.131 --- 10.0.0.1 ping statistics --- 00:08:25.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.131 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66552 00:08:25.131 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66552 00:08:25.132 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66552 ']' 00:08:25.132 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.132 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.132 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
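The ping checks above close out nvmf_veth_init: the target side lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator stays on the host as nvmf_init_if/10.0.0.1, and everything is joined by the nvmf_br bridge. A condensed sketch of the same bring-up, using only the interface names, addresses and rules that appear in this log (the real nvmf/common.sh adds teardown and error handling not shown here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target IP
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target IP
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host

With that topology verified, NVMF_APP is re-prefixed with "ip netns exec nvmf_tgt_ns_spdk", which is why the target started below listens on 10.0.0.2:4420 from inside the namespace.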
00:08:25.132 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.132 11:32:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:25.132 [2024-07-12 11:32:28.561887] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:25.132 [2024-07-12 11:32:28.562004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.390 [2024-07-12 11:32:28.698898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.390 [2024-07-12 11:32:28.826561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.390 [2024-07-12 11:32:28.826793] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.390 [2024-07-12 11:32:28.826924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.390 [2024-07-12 11:32:28.827048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.390 [2024-07-12 11:32:28.827090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.390 [2024-07-12 11:32:28.827335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.390 [2024-07-12 11:32:28.827480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.390 [2024-07-12 11:32:28.827625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.390 [2024-07-12 11:32:28.827625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.326 [2024-07-12 11:32:29.621165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.326 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.327 
11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.327 [2024-07-12 11:32:29.633477] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.327 Malloc0 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.327 [2024-07-12 11:32:29.692648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66587 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66589 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:26.327 11:32:29 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66591 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:26.327 { 00:08:26.327 "params": { 00:08:26.327 "name": "Nvme$subsystem", 00:08:26.327 "trtype": "$TEST_TRANSPORT", 00:08:26.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.327 "adrfam": "ipv4", 00:08:26.327 "trsvcid": "$NVMF_PORT", 00:08:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.327 "hdgst": ${hdgst:-false}, 00:08:26.327 "ddgst": ${ddgst:-false} 00:08:26.327 }, 00:08:26.327 "method": "bdev_nvme_attach_controller" 00:08:26.327 } 00:08:26.327 EOF 00:08:26.327 )") 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:26.327 { 00:08:26.327 "params": { 00:08:26.327 "name": "Nvme$subsystem", 00:08:26.327 "trtype": "$TEST_TRANSPORT", 00:08:26.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.327 "adrfam": "ipv4", 00:08:26.327 "trsvcid": "$NVMF_PORT", 00:08:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.327 "hdgst": ${hdgst:-false}, 00:08:26.327 "ddgst": ${ddgst:-false} 00:08:26.327 }, 00:08:26.327 "method": "bdev_nvme_attach_controller" 00:08:26.327 } 00:08:26.327 EOF 00:08:26.327 )") 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66593 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:26.327 { 00:08:26.327 "params": { 00:08:26.327 "name": "Nvme$subsystem", 00:08:26.327 "trtype": "$TEST_TRANSPORT", 00:08:26.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.327 "adrfam": "ipv4", 00:08:26.327 "trsvcid": "$NVMF_PORT", 00:08:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.327 "hdgst": ${hdgst:-false}, 00:08:26.327 "ddgst": ${ddgst:-false} 00:08:26.327 }, 00:08:26.327 "method": "bdev_nvme_attach_controller" 00:08:26.327 } 00:08:26.327 EOF 00:08:26.327 )") 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:26.327 { 00:08:26.327 "params": { 00:08:26.327 "name": "Nvme$subsystem", 00:08:26.327 "trtype": "$TEST_TRANSPORT", 00:08:26.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.327 "adrfam": "ipv4", 00:08:26.327 "trsvcid": "$NVMF_PORT", 00:08:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.327 "hdgst": ${hdgst:-false}, 00:08:26.327 "ddgst": ${ddgst:-false} 00:08:26.327 }, 00:08:26.327 "method": "bdev_nvme_attach_controller" 00:08:26.327 } 00:08:26.327 EOF 00:08:26.327 )") 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:26.327 "params": { 00:08:26.327 "name": "Nvme1", 00:08:26.327 "trtype": "tcp", 00:08:26.327 "traddr": "10.0.0.2", 00:08:26.327 "adrfam": "ipv4", 00:08:26.327 "trsvcid": "4420", 00:08:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.327 "hdgst": false, 00:08:26.327 "ddgst": false 00:08:26.327 }, 00:08:26.327 "method": "bdev_nvme_attach_controller" 00:08:26.327 }' 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:26.327 "params": { 00:08:26.327 "name": "Nvme1", 00:08:26.327 "trtype": "tcp", 00:08:26.327 "traddr": "10.0.0.2", 00:08:26.327 "adrfam": "ipv4", 00:08:26.327 "trsvcid": "4420", 00:08:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.327 "hdgst": false, 00:08:26.327 "ddgst": false 00:08:26.327 }, 00:08:26.327 "method": "bdev_nvme_attach_controller" 00:08:26.327 }' 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:26.327 "params": { 00:08:26.327 "name": "Nvme1", 00:08:26.327 "trtype": "tcp", 00:08:26.327 "traddr": "10.0.0.2", 00:08:26.327 "adrfam": "ipv4", 00:08:26.327 "trsvcid": "4420", 00:08:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.327 "hdgst": false, 00:08:26.327 "ddgst": false 00:08:26.327 }, 00:08:26.327 "method": "bdev_nvme_attach_controller" 00:08:26.327 }' 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
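The heredoc fragments collected via config+=() above become, after the shell substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT and jq normalizes the text, the controller entry that each printf '%s\n' call hands to its bdevperf instance. Reindented here for readability; only this inner object is visible in the log, the outer wrapper produced by gen_nvmf_target_json is not shown:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }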
00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:26.327 "params": { 00:08:26.327 "name": "Nvme1", 00:08:26.327 "trtype": "tcp", 00:08:26.327 "traddr": "10.0.0.2", 00:08:26.327 "adrfam": "ipv4", 00:08:26.327 "trsvcid": "4420", 00:08:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:26.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:26.327 "hdgst": false, 00:08:26.327 "ddgst": false 00:08:26.327 }, 00:08:26.327 "method": "bdev_nvme_attach_controller" 00:08:26.327 }' 00:08:26.327 11:32:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66587 00:08:26.327 [2024-07-12 11:32:29.756204] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:26.327 [2024-07-12 11:32:29.756459] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:26.327 [2024-07-12 11:32:29.764200] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:26.328 [2024-07-12 11:32:29.764539] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:26.328 [2024-07-12 11:32:29.769523] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:26.328 [2024-07-12 11:32:29.769748] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:26.586 [2024-07-12 11:32:29.784903] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:26.586 [2024-07-12 11:32:29.785235] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:26.586 [2024-07-12 11:32:29.963829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.586 [2024-07-12 11:32:30.033608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.845 [2024-07-12 11:32:30.051824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:26.845 [2024-07-12 11:32:30.106160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.845 [2024-07-12 11:32:30.106656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.845 [2024-07-12 11:32:30.149210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:26.845 [2024-07-12 11:32:30.187956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.845 [2024-07-12 11:32:30.191940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:26.845 [2024-07-12 11:32:30.198182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.845 Running I/O for 1 seconds... 
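What runs here is the core of the bdev_io_wait test: four bdevperf instances are launched in parallel against the same cnode1 subsystem, each pinned to its own core mask and exercising one workload, and the script then waits on their PIDs (66587/66589/66591/66593 above). Reduced to its essentials the pattern looks roughly like this, where gen_nvmf_target_json stands for the JSON generator whose output was shown above and the per-PID waits are collapsed into one:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The --json /dev/fd/63 arguments in the log are exactly this process substitution, so each instance attaches the same Nvme1 controller before running its one-second workload.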
00:08:26.845 [2024-07-12 11:32:30.236354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.845 [2024-07-12 11:32:30.283099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:27.104 Running I/O for 1 seconds... 00:08:27.104 [2024-07-12 11:32:30.331094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.104 Running I/O for 1 seconds... 00:08:27.104 Running I/O for 1 seconds... 00:08:28.040 00:08:28.040 Latency(us) 00:08:28.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.040 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:28.040 Nvme1n1 : 1.01 10060.81 39.30 0.00 0.00 12661.51 8757.99 20375.74 00:08:28.040 =================================================================================================================== 00:08:28.040 Total : 10060.81 39.30 0.00 0.00 12661.51 8757.99 20375.74 00:08:28.040 00:08:28.040 Latency(us) 00:08:28.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.040 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:28.040 Nvme1n1 : 1.01 7710.79 30.12 0.00 0.00 16502.13 6553.60 25856.93 00:08:28.040 =================================================================================================================== 00:08:28.040 Total : 7710.79 30.12 0.00 0.00 16502.13 6553.60 25856.93 00:08:28.040 00:08:28.040 Latency(us) 00:08:28.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.040 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:28.040 Nvme1n1 : 1.01 8539.17 33.36 0.00 0.00 14928.19 7477.06 25618.62 00:08:28.040 =================================================================================================================== 00:08:28.040 Total : 8539.17 33.36 0.00 0.00 14928.19 7477.06 25618.62 00:08:28.040 00:08:28.040 Latency(us) 00:08:28.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.040 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:28.040 Nvme1n1 : 1.00 174292.53 680.83 0.00 0.00 731.71 344.44 1414.98 00:08:28.040 =================================================================================================================== 00:08:28.040 Total : 174292.53 680.83 0.00 0.00 731.71 344.44 1414.98 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66589 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66591 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66593 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:28.299 11:32:31 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.299 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.299 rmmod nvme_tcp 00:08:28.299 rmmod nvme_fabrics 00:08:28.557 rmmod nvme_keyring 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66552 ']' 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66552 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66552 ']' 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66552 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66552 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:28.557 killing process with pid 66552 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66552' 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66552 00:08:28.557 11:32:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66552 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:28.815 00:08:28.815 real 0m4.000s 00:08:28.815 user 0m17.417s 00:08:28.815 sys 0m2.243s 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.815 11:32:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.815 ************************************ 00:08:28.815 END TEST nvmf_bdev_io_wait 00:08:28.815 ************************************ 00:08:28.816 11:32:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:28.816 11:32:32 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.816 11:32:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.816 11:32:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.816 11:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.816 ************************************ 00:08:28.816 START TEST nvmf_queue_depth 00:08:28.816 ************************************ 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.816 * Looking for test storage... 00:08:28.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:28.816 Cannot find device "nvmf_tgt_br" 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.816 Cannot find device "nvmf_tgt_br2" 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:28.816 11:32:32 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:28.816 Cannot find device "nvmf_tgt_br" 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:28.816 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:29.075 Cannot find device "nvmf_tgt_br2" 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.075 
11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:29.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:29.075 00:08:29.075 --- 10.0.0.2 ping statistics --- 00:08:29.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.075 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:29.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:29.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:08:29.075 00:08:29.075 --- 10.0.0.3 ping statistics --- 00:08:29.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.075 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:29.075 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:29.076 00:08:29.076 --- 10.0.0.1 ping statistics --- 00:08:29.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.076 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.076 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66833 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66833 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66833 ']' 00:08:29.334 11:32:32 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.334 11:32:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.334 [2024-07-12 11:32:32.605120] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:29.334 [2024-07-12 11:32:32.605221] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.334 [2024-07-12 11:32:32.742081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.593 [2024-07-12 11:32:32.859659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.593 [2024-07-12 11:32:32.859710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.593 [2024-07-12 11:32:32.859722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.593 [2024-07-12 11:32:32.859730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.593 [2024-07-12 11:32:32.859738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
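Because this target is started with -e 0xFFFF, the tracepoint notices above are directly usable while the test runs; the two options below are the ones the log itself suggests (the spdk_trace binary location and output file are assumptions, the commands come from the notices):

    # capture a snapshot of nvmf tracepoints from target instance 0 while it runs
    spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # or keep the raw trace buffer for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/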
00:08:29.593 [2024-07-12 11:32:32.859769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.593 [2024-07-12 11:32:32.914207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.526 [2024-07-12 11:32:33.686374] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.526 Malloc0 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.526 [2024-07-12 11:32:33.759785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
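The rpc_cmd sequence above is the whole target-side setup for this test: a TCP transport, one 64 MiB / 512 B-block malloc bdev, a subsystem, a namespace and a listener on 10.0.0.2:4420. rpc_cmd is the test harness front end for the target's JSON-RPC socket; issued by hand the same configuration would look roughly like this (scripts/rpc.py path assumed from the repo layout in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420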
00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66865 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66865 /var/tmp/bdevperf.sock 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66865 ']' 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.526 11:32:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:30.526 [2024-07-12 11:32:33.818895] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:30.526 [2024-07-12 11:32:33.818990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66865 ] 00:08:30.526 [2024-07-12 11:32:33.956086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.785 [2024-07-12 11:32:34.083472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.785 [2024-07-12 11:32:34.141031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:31.351 11:32:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.352 11:32:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:31.352 11:32:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:31.352 11:32:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.352 11:32:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.609 NVMe0n1 00:08:31.609 11:32:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.610 11:32:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:31.610 Running I/O for 10 seconds... 
00:08:43.809 00:08:43.809 Latency(us) 00:08:43.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.809 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:43.809 Verification LBA range: start 0x0 length 0x4000 00:08:43.809 NVMe0n1 : 10.08 8046.19 31.43 0.00 0.00 126690.88 20852.36 93895.21 00:08:43.809 =================================================================================================================== 00:08:43.809 Total : 8046.19 31.43 0.00 0.00 126690.88 20852.36 93895.21 00:08:43.809 0 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66865 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66865 ']' 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66865 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66865 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.809 killing process with pid 66865 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66865' 00:08:43.809 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.809 00:08:43.809 Latency(us) 00:08:43.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.809 =================================================================================================================== 00:08:43.809 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66865 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66865 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.809 rmmod nvme_tcp 00:08:43.809 rmmod nvme_fabrics 00:08:43.809 rmmod nvme_keyring 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66833 ']' 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66833 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66833 ']' 00:08:43.809 
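As a quick cross-check of the result table above: 8046.19 IOPS at 4096-byte I/Os is about 32.96 MB/s, i.e. 31.4 MiB/s, which matches the reported 31.43 MiB/s, and with a queue depth of 1024 Little's law predicts an average latency of roughly 1024 / 8046 IOPS ≈ 127 ms, in line with the reported average of 126690.88 us (≈126.7 ms).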
11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66833 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66833 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66833' 00:08:43.809 killing process with pid 66833 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66833 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66833 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:43.809 00:08:43.809 real 0m13.642s 00:08:43.809 user 0m23.660s 00:08:43.809 sys 0m2.218s 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.809 11:32:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:43.809 ************************************ 00:08:43.809 END TEST nvmf_queue_depth 00:08:43.809 ************************************ 00:08:43.810 11:32:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:43.810 11:32:45 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:43.810 11:32:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.810 11:32:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.810 11:32:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.810 ************************************ 00:08:43.810 START TEST nvmf_target_multipath 00:08:43.810 ************************************ 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:43.810 * Looking for test storage... 
00:08:43.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.810 11:32:45 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:43.810 Cannot find device "nvmf_tgt_br" 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.810 Cannot find device "nvmf_tgt_br2" 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:43.810 Cannot find device "nvmf_tgt_br" 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:43.810 
11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:43.810 Cannot find device "nvmf_tgt_br2" 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:43.810 11:32:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:43.810 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:43.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:43.811 00:08:43.811 --- 10.0.0.2 ping statistics --- 00:08:43.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.811 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:43.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:08:43.811 00:08:43.811 --- 10.0.0.3 ping statistics --- 00:08:43.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.811 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:43.811 00:08:43.811 --- 10.0.0.1 ping statistics --- 00:08:43.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.811 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67180 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
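The nvmf_veth_init sequence traced above builds the virtual topology that the rest of the multipath test relies on. Stripped of the trace noise it is essentially the following (interface, bridge and namespace names, addresses and the 4420 port are the ones this run uses; the nvmf_tgt_if2/10.0.0.3 pair is created the same way as nvmf_tgt_if/10.0.0.2, and each interface is also brought up with `ip link set ... up`, omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # reachability check before starting nvmf_tgt inside the namespace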
00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67180 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67180 ']' 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.811 11:32:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:43.811 [2024-07-12 11:32:46.306542] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:08:43.811 [2024-07-12 11:32:46.306637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.811 [2024-07-12 11:32:46.449730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.811 [2024-07-12 11:32:46.566007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.811 [2024-07-12 11:32:46.566060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.811 [2024-07-12 11:32:46.566072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.811 [2024-07-12 11:32:46.566080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.811 [2024-07-12 11:32:46.566088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
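The app_setup_trace notices above point at the two ways of getting at the tracepoint data this target records (it was started with -e 0xFFFF under shm id 0); for example, the destination path in the second line being only an illustration:

  spdk_trace -s nvmf -i 0            # live snapshot, as suggested by the notice
  cp /dev/shm/nvmf_trace.0 /tmp/     # or grab the shm file for offline analysis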
00:08:43.811 [2024-07-12 11:32:46.566217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.811 [2024-07-12 11:32:46.566896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.811 [2024-07-12 11:32:46.567027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.811 [2024-07-12 11:32:46.567099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.811 [2024-07-12 11:32:46.621026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:44.069 11:32:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.069 11:32:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:08:44.069 11:32:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:44.069 11:32:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.069 11:32:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:44.069 11:32:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.069 11:32:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:44.327 [2024-07-12 11:32:47.615784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.327 11:32:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:44.585 Malloc0 00:08:44.585 11:32:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:44.843 11:32:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.101 11:32:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.358 [2024-07-12 11:32:48.667798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.358 11:32:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:45.615 [2024-07-12 11:32:48.884063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:45.615 11:32:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:45.615 11:32:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:45.873 11:32:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.873 11:32:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:08:45.873 11:32:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.873 11:32:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.873 11:32:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67275 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:47.775 11:32:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:47.775 [global] 00:08:47.775 thread=1 00:08:47.775 invalidate=1 00:08:47.775 rw=randrw 00:08:47.775 time_based=1 00:08:47.775 runtime=6 00:08:47.775 ioengine=libaio 00:08:47.775 direct=1 00:08:47.775 bs=4096 00:08:47.775 iodepth=128 00:08:47.775 norandommap=0 00:08:47.775 numjobs=1 00:08:47.775 00:08:47.775 verify_dump=1 00:08:47.775 verify_backlog=512 00:08:47.775 verify_state_save=0 00:08:47.775 do_verify=1 00:08:47.775 verify=crc32c-intel 00:08:47.775 [job0] 00:08:47.775 filename=/dev/nvme0n1 00:08:48.033 Could not set queue depth (nvme0n1) 00:08:48.033 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:48.033 fio-3.35 00:08:48.033 Starting 1 thread 00:08:48.969 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:49.227 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:49.486 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:49.743 11:32:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:49.743 11:32:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67275 00:08:55.053 00:08:55.053 job0: (groupid=0, jobs=1): err= 0: pid=67296: Fri Jul 12 11:32:57 2024 00:08:55.053 read: IOPS=9587, BW=37.4MiB/s (39.3MB/s)(225MiB/6007msec) 00:08:55.053 slat (usec): min=6, max=6652, avg=62.08, stdev=247.99 00:08:55.053 clat (usec): min=1717, max=19788, avg=9082.42, stdev=1739.67 00:08:55.053 lat (usec): min=1745, max=19802, avg=9144.50, stdev=1746.18 00:08:55.053 clat percentiles (usec): 00:08:55.053 | 1.00th=[ 4555], 5.00th=[ 6783], 10.00th=[ 7570], 20.00th=[ 8094], 00:08:55.053 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:08:55.053 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[12911], 00:08:55.053 | 99.00th=[14484], 99.50th=[15008], 99.90th=[18744], 99.95th=[19006], 00:08:55.053 | 99.99th=[19792] 00:08:55.053 bw ( KiB/s): min= 9072, max=25688, per=51.64%, avg=19803.36, stdev=5264.54, samples=11 00:08:55.053 iops : min= 2268, max= 6422, avg=4950.82, stdev=1316.13, samples=11 00:08:55.053 write: IOPS=5613, BW=21.9MiB/s (23.0MB/s)(119MiB/5430msec); 0 zone resets 00:08:55.053 slat (usec): min=15, max=3187, avg=69.10, stdev=170.89 00:08:55.053 clat (usec): min=2687, max=19569, avg=7840.04, stdev=1484.36 00:08:55.053 lat (usec): min=2714, max=19594, avg=7909.14, stdev=1490.29 00:08:55.053 clat percentiles (usec): 00:08:55.053 | 1.00th=[ 3490], 5.00th=[ 4621], 10.00th=[ 6194], 20.00th=[ 7177], 00:08:55.053 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8160], 00:08:55.053 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[ 9765], 00:08:55.053 | 99.00th=[12256], 99.50th=[12911], 99.90th=[14353], 99.95th=[16712], 00:08:55.053 | 99.99th=[17957] 00:08:55.053 bw ( KiB/s): min= 9240, max=25328, per=88.41%, avg=19850.55, stdev=5188.75, samples=11 00:08:55.053 iops : min= 2310, max= 6332, avg=4962.64, stdev=1297.19, samples=11 00:08:55.053 lat (msec) : 2=0.01%, 4=1.16%, 10=86.86%, 20=11.97% 00:08:55.053 cpu : usr=5.31%, sys=20.93%, ctx=5109, majf=0, minf=78 00:08:55.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:55.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.053 issued rwts: total=57590,30480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.053 00:08:55.053 Run status group 0 (all jobs): 00:08:55.053 READ: bw=37.4MiB/s (39.3MB/s), 37.4MiB/s-37.4MiB/s (39.3MB/s-39.3MB/s), io=225MiB (236MB), run=6007-6007msec 00:08:55.053 WRITE: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=119MiB (125MB), run=5430-5430msec 00:08:55.053 00:08:55.053 Disk stats (read/write): 00:08:55.053 nvme0n1: ios=57017/29672, merge=0/0, ticks=496729/218596, in_queue=715325, util=98.66% 00:08:55.053 11:32:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:55.053 11:32:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67376 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:55.053 11:32:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:55.053 [global] 00:08:55.053 thread=1 00:08:55.053 invalidate=1 00:08:55.053 rw=randrw 00:08:55.053 time_based=1 00:08:55.053 runtime=6 00:08:55.053 ioengine=libaio 00:08:55.053 direct=1 00:08:55.053 bs=4096 00:08:55.053 iodepth=128 00:08:55.053 norandommap=0 00:08:55.053 numjobs=1 00:08:55.053 00:08:55.053 verify_dump=1 00:08:55.053 verify_backlog=512 00:08:55.053 verify_state_save=0 00:08:55.053 do_verify=1 00:08:55.053 verify=crc32c-intel 00:08:55.053 [job0] 00:08:55.053 filename=/dev/nvme0n1 00:08:55.053 Could not set queue depth (nvme0n1) 00:08:55.053 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:55.053 fio-3.35 00:08:55.053 Starting 1 thread 00:08:55.620 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:55.878 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:56.137 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:56.461 11:32:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:56.720 11:33:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67376 00:09:00.908 00:09:00.908 job0: (groupid=0, jobs=1): err= 0: pid=67397: Fri Jul 12 11:33:04 2024 00:09:00.908 read: IOPS=9640, BW=37.7MiB/s (39.5MB/s)(226MiB/6007msec) 00:09:00.908 slat (usec): min=7, max=7066, avg=53.75, stdev=231.54 00:09:00.908 clat (usec): min=349, max=19316, avg=9247.49, stdev=2336.69 00:09:00.908 lat (usec): min=370, max=19326, avg=9301.24, stdev=2345.73 00:09:00.908 clat percentiles (usec): 00:09:00.908 | 1.00th=[ 2638], 5.00th=[ 4752], 10.00th=[ 6259], 20.00th=[ 8094], 00:09:00.908 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9634], 00:09:00.908 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11600], 95.00th=[13698], 00:09:00.908 | 99.00th=[15401], 99.50th=[16057], 99.90th=[17957], 99.95th=[18482], 00:09:00.908 | 99.99th=[19006] 00:09:00.908 bw ( KiB/s): min= 7152, max=26896, per=50.62%, avg=19521.45, stdev=6213.77, samples=11 00:09:00.908 iops : min= 1788, max= 6724, avg=4880.36, stdev=1553.44, samples=11 00:09:00.908 write: IOPS=5544, BW=21.7MiB/s (22.7MB/s)(115MiB/5330msec); 0 zone resets 00:09:00.908 slat (usec): min=15, max=1914, avg=61.75, stdev=158.51 00:09:00.908 clat (usec): min=1069, max=17739, avg=7683.21, stdev=2063.06 00:09:00.908 lat (usec): min=1099, max=17765, avg=7744.96, stdev=2074.69 00:09:00.908 clat percentiles (usec): 00:09:00.908 | 1.00th=[ 2474], 5.00th=[ 3720], 10.00th=[ 4490], 20.00th=[ 5800], 00:09:00.908 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 8455], 00:09:00.908 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9503], 95.00th=[10028], 00:09:00.908 | 99.00th=[13042], 99.50th=[13698], 99.90th=[15795], 99.95th=[16450], 00:09:00.908 | 99.99th=[16712] 00:09:00.908 bw ( KiB/s): min= 7592, max=26832, per=88.24%, avg=19569.27, stdev=6066.92, samples=11 00:09:00.908 iops : min= 1898, max= 6708, avg=4892.27, stdev=1516.75, samples=11 00:09:00.908 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.04% 00:09:00.908 lat (msec) : 2=0.39%, 4=3.88%, 10=76.36%, 20=19.28% 00:09:00.908 cpu : usr=5.13%, sys=20.93%, ctx=5072, majf=0, minf=78 00:09:00.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:00.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.908 issued rwts: total=57913,29552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.908 00:09:00.908 Run status group 0 (all jobs): 00:09:00.908 READ: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=226MiB (237MB), run=6007-6007msec 00:09:00.908 WRITE: bw=21.7MiB/s (22.7MB/s), 21.7MiB/s-21.7MiB/s (22.7MB/s-22.7MB/s), io=115MiB (121MB), run=5330-5330msec 00:09:00.908 00:09:00.908 Disk stats (read/write): 00:09:00.908 nvme0n1: ios=57299/28903, merge=0/0, ticks=509037/208396, in_queue=717433, util=98.72% 00:09:00.908 11:33:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:01.168 11:33:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:01.168 11:33:04 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:01.168 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:01.168 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.168 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:01.168 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.168 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:01.168 11:33:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.438 rmmod nvme_tcp 00:09:01.438 rmmod nvme_fabrics 00:09:01.438 rmmod nvme_keyring 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67180 ']' 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67180 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67180 ']' 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67180 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67180 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:01.438 killing process with pid 67180 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67180' 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67180 00:09:01.438 11:33:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67180 00:09:01.696 
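Taken together, the multipath exercise above connects the host to both listeners of the ANA-reporting subsystem, flips the ANA state of each listener on the target, and checks that the per-path state shows up in sysfs while fio keeps I/O running against /dev/nvme0n1. Condensed, the core of it looks like the following (NQN, addresses and the nvme0c*n1 path names are the ones this run produced; the --hostnqn/--hostid flags shown earlier in the trace are omitted here):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  cat /sys/block/nvme0c0n1/ana_state    # expected: inaccessible
  cat /sys/block/nvme0c1n1/ana_state    # expected: non-optimized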
11:33:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:01.696 ************************************ 00:09:01.696 END TEST nvmf_target_multipath 00:09:01.696 ************************************ 00:09:01.696 00:09:01.696 real 0m19.271s 00:09:01.696 user 1m12.757s 00:09:01.696 sys 0m8.780s 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.696 11:33:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:01.696 11:33:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.696 11:33:05 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:01.696 11:33:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.696 11:33:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.696 11:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.696 ************************************ 00:09:01.696 START TEST nvmf_zcopy 00:09:01.696 ************************************ 00:09:01.696 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:01.955 * Looking for test storage... 
00:09:01.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:01.955 Cannot find device "nvmf_tgt_br" 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.955 Cannot find device "nvmf_tgt_br2" 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:01.955 Cannot find device "nvmf_tgt_br" 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:01.955 Cannot find device "nvmf_tgt_br2" 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:01.955 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:02.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:09:02.213 00:09:02.213 --- 10.0.0.2 ping statistics --- 00:09:02.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.213 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:02.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:02.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:09:02.213 00:09:02.213 --- 10.0.0.3 ping statistics --- 00:09:02.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.213 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:02.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:02.213 00:09:02.213 --- 10.0.0.1 ping statistics --- 00:09:02.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.213 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67642 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67642 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67642 ']' 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.213 11:33:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.213 [2024-07-12 11:33:05.645867] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:09:02.213 [2024-07-12 11:33:05.645958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.471 [2024-07-12 11:33:05.783313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.471 [2024-07-12 11:33:05.909339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.471 [2024-07-12 11:33:05.909407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
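The nvmf_veth_init sequence traced above is what gives the target its own network stack: a dedicated namespace for nvmf_tgt, veth pairs whose target ends are moved into that namespace, and a host-side bridge joining the peer ends. Condensed from the commands in the trace (interface names and the 10.0.0.x addresses are the test defaults; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # the host-side initiator can now reach the target namespace

The ping round-trips in the trace confirm the topology before nvmf_tgt is started inside the namespace.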
00:09:02.471 [2024-07-12 11:33:05.909421] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.471 [2024-07-12 11:33:05.909432] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.471 [2024-07-12 11:33:05.909441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.471 [2024-07-12 11:33:05.909488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.729 [2024-07-12 11:33:05.967436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.295 [2024-07-12 11:33:06.677334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.295 [2024-07-12 11:33:06.693397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:03.295 malloc0 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.295 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.295 { 00:09:03.295 "params": { 00:09:03.295 "name": "Nvme$subsystem", 00:09:03.295 "trtype": "$TEST_TRANSPORT", 00:09:03.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.295 "adrfam": "ipv4", 00:09:03.295 "trsvcid": "$NVMF_PORT", 00:09:03.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.295 "hdgst": ${hdgst:-false}, 00:09:03.295 "ddgst": ${ddgst:-false} 00:09:03.296 }, 00:09:03.296 "method": "bdev_nvme_attach_controller" 00:09:03.296 } 00:09:03.296 EOF 00:09:03.296 )") 00:09:03.296 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:03.296 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:03.553 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:03.553 11:33:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.553 "params": { 00:09:03.553 "name": "Nvme1", 00:09:03.553 "trtype": "tcp", 00:09:03.553 "traddr": "10.0.0.2", 00:09:03.553 "adrfam": "ipv4", 00:09:03.553 "trsvcid": "4420", 00:09:03.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.553 "hdgst": false, 00:09:03.553 "ddgst": false 00:09:03.553 }, 00:09:03.553 "method": "bdev_nvme_attach_controller" 00:09:03.553 }' 00:09:03.553 [2024-07-12 11:33:06.795779] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:09:03.553 [2024-07-12 11:33:06.795917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67675 ] 00:09:03.553 [2024-07-12 11:33:06.937042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.811 [2024-07-12 11:33:07.062480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.811 [2024-07-12 11:33:07.127537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.811 Running I/O for 10 seconds... 
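Stripped of the xtrace noise, the target setup for the zcopy test reduces to a short RPC sequence, after which bdevperf attaches over NVMe/TCP using the JSON that gen_nvmf_target_json prints above. A sketch of the equivalent manual invocation, assuming a shell that has sourced test/nvmf/common.sh so gen_nvmf_target_json and the 10.0.0.2 test address are available:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                   # 32 MB ram-backed bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

With 8192-byte I/O, the MiB/s column in the results that follow is just IOPS × 8192 / 2^20, e.g. 5972.38 × 8192 / 1048576 ≈ 46.7 MiB/s.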
00:09:16.014 00:09:16.014 Latency(us) 00:09:16.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.014 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:16.014 Verification LBA range: start 0x0 length 0x1000 00:09:16.014 Nvme1n1 : 10.02 5972.38 46.66 0.00 0.00 21364.81 2636.33 31695.59 00:09:16.014 =================================================================================================================== 00:09:16.014 Total : 5972.38 46.66 0.00 0.00 21364.81 2636.33 31695.59 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67797 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:16.014 { 00:09:16.014 "params": { 00:09:16.014 "name": "Nvme$subsystem", 00:09:16.014 "trtype": "$TEST_TRANSPORT", 00:09:16.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.014 "adrfam": "ipv4", 00:09:16.014 "trsvcid": "$NVMF_PORT", 00:09:16.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.014 "hdgst": ${hdgst:-false}, 00:09:16.014 "ddgst": ${ddgst:-false} 00:09:16.014 }, 00:09:16.014 "method": "bdev_nvme_attach_controller" 00:09:16.014 } 00:09:16.014 EOF 00:09:16.014 )") 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:16.014 [2024-07-12 11:33:17.494021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.014 [2024-07-12 11:33:17.494062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:16.014 11:33:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:16.014 "params": { 00:09:16.014 "name": "Nvme1", 00:09:16.014 "trtype": "tcp", 00:09:16.014 "traddr": "10.0.0.2", 00:09:16.014 "adrfam": "ipv4", 00:09:16.014 "trsvcid": "4420", 00:09:16.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.014 "hdgst": false, 00:09:16.015 "ddgst": false 00:09:16.015 }, 00:09:16.015 "method": "bdev_nvme_attach_controller" 00:09:16.015 }' 00:09:16.015 [2024-07-12 11:33:17.505974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.506013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.513987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.514010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.525974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.525997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.533976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.534000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.541980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.542018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.549985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.550007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.551514] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:09:16.015 [2024-07-12 11:33:17.551606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67797 ] 00:09:16.015 [2024-07-12 11:33:17.561995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.562017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.574014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.574037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.586003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.586040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.598008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.598033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.610060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.610089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.622055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.622082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.634049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.634073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.646062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.646084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.658067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.658090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.670076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.670102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.682068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.682110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.693158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.015 [2024-07-12 11:33:17.694072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.694094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.706095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.706128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.718080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.718105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.730087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.730114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.742106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.742139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.754114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.754150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.766119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.766155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.778126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.778168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.790119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.790151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.802130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.802164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.809346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.015 [2024-07-12 11:33:17.814115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.814140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.826123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.826152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.838129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.838162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.850133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.850166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.862138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.862172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.870868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.015 [2024-07-12 11:33:17.874135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:16.015 [2024-07-12 11:33:17.874161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.886142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.886174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.898135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.898164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.910130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.910154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.922153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.922183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.934163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.934191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.946168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.946196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.958183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.958210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.970196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.970222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:17.982214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.982244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 Running I/O for 5 seconds... 
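The wall of repeated messages that follows is the point of this phase: while the 5-second randrw job keeps I/O outstanding, add-namespace RPCs are issued again and again for NSID 1, which already exists. Each attempt pauses the subsystem, is rejected with "Requested NSID 1 already in use", and is reported by the RPC layer as "Unable to add namespace", exercising the subsystem pause/resume path under load. A loop of roughly this shape reproduces the pattern (a hypothetical illustration, not the test script's exact code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # perfpid is the backgrounded bdevperf (67797 in this run); hammer the RPC until it exits.
  while kill -0 "$perfpid" 2> /dev/null; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true   # expected to fail
  done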
00:09:16.015 [2024-07-12 11:33:17.994221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:17.994248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.011114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.011147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.030085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.030117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.044466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.044495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.060058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.060088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.069487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.069516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.085016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.085045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.101251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.101284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.120658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.120689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.135198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.135229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.147331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.147361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.015 [2024-07-12 11:33:18.162777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.015 [2024-07-12 11:33:18.162807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.178918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.178949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.188470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.188501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.201185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 
[2024-07-12 11:33:18.201216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.217785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.217817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.233727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.233757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.251319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.251349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.267348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.267378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.286389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.286420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.300771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.300800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.316156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.316187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.325728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.325769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.341260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.341292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.358025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.358057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.374719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.374749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.391211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.391249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.407449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.407480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.424485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.424516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.440696] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.440727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.457684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.457714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.474447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.474479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.492001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.492033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.507433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.507465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.523173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.523203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.539211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.539249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.558524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.558555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.573022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.573052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.590691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.590721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.607326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.607358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.622956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.622987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.632158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.632190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.648143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.648175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.657701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.657730] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.674286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.674318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.691087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.691117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.707670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.707701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.724862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.724895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.740470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.740502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.757683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.757714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.774413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.774445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.790453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.790483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.807968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.807999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.822803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.822836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.839147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.839178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.855229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.855270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.872909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.872940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.887721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.016 [2024-07-12 11:33:18.887751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.016 [2024-07-12 11:33:18.903683] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:16.016 [2024-07-12 11:33:18.903713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair, subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: "Unable to add namespace", is logged for every further add-namespace attempt from 11:33:18.920 through 11:33:22.989 (elapsed 00:09:16.016 to 00:09:19.628) ...]
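The condensed error pair above is the expected response when the target is asked to attach another namespace under an NSID that is already taken, which is what this part of the zcopy test keeps doing while I/O is in flight. A standalone way to provoke the same two messages against a freshly started SPDK target is sketched below; the rpc.py location and the malloc bdev parameters are assumptions, while the subsystem NQN and NSID 1 match this run.

  #!/usr/bin/env bash
  # Hedged sketch: provoke "Requested NSID 1 already in use" on purpose.
  # Assumes a running SPDK nvmf target with the default RPC socket; the
  # checkout path below mirrors the workspace layout seen in this log.
  set -e

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Create the subsystem if it does not exist yet (serial number is arbitrary).
  "$RPC" nvmf_create_transport -t TCP
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001

  # Back NSID 1 with a small malloc bdev; the first attach succeeds.
  "$RPC" bdev_malloc_create -b malloc0 32 512
  "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1

  # Every further attempt to reuse NSID 1 is rejected with the same pair of
  # errors seen above: subsystem.c "already in use", nvmf_rpc.c "Unable to add".
  "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 \
      || echo "second add_ns rejected, NSID 1 already in use"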
00:09:19.628 
00:09:19.628 Latency(us)
00:09:19.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:19.628 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:19.628 Nvme1n1 : 5.01 11741.73 91.73 0.00 0.00 10889.50 4766.25 21209.83
00:09:19.628 ===================================================================================================================
00:09:19.628 Total : 11741.73 91.73 0.00 0.00 10889.50 4766.25 21209.83
[... after the summary, the same add-namespace error pair from subsystem.c:2054 and nvmf_rpc.c:1546 continues at roughly 12 ms intervals from 11:33:23.000 through 11:33:23.096 ...]
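The Latency(us) block is the performance job's end-of-run summary: a 5.01 s randrw run against Nvme1n1 at queue depth 128 and 8 KiB I/O that averaged roughly 11.7k IOPS with a mean completion latency of about 10.9 ms. If figures like these need to be trended across builds, they can be scraped from the log with a short awk sketch like the one below; the log file name is an assumption.

  #!/usr/bin/env bash
  # Hedged sketch: pull aggregate IOPS and average latency (us) out of a run
  # log that contains the summary table above. build.log is an assumed name.
  awk '/Total[[:space:]]*:/ {
      # Counting from the right: max, min, Average, TO/s, Fail/s, MiB/s, IOPS
      printf "iops=%s avg_latency_us=%s\n", $(NF-6), $(NF-2)
  }' build.log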
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.108612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.108650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.120616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.120654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.132607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.132638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.144608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.144639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.156630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.156669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.168625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.168662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.180619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.180651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.192653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.192700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.204631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.204666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 [2024-07-12 11:33:23.216617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.886 [2024-07-12 11:33:23.216645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.886 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67797) - No such process 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67797 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.886 delay0 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.886 11:33:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:20.144 [2024-07-12 11:33:23.410114] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:26.705 Initializing NVMe Controllers 00:09:26.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:26.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:26.705 Initialization complete. Launching workers. 00:09:26.705 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 67 00:09:26.705 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 354, failed to submit 33 00:09:26.705 success 209, unsuccess 145, failed 0 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.705 rmmod nvme_tcp 00:09:26.705 rmmod nvme_fabrics 00:09:26.705 rmmod nvme_keyring 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67642 ']' 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67642 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67642 ']' 00:09:26.705 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67642 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67642 00:09:26.706 killing process with pid 67642 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67642' 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67642 00:09:26.706 11:33:29 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67642 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:26.706 ************************************ 00:09:26.706 END TEST nvmf_zcopy 00:09:26.706 ************************************ 00:09:26.706 00:09:26.706 real 0m24.738s 00:09:26.706 user 0m40.788s 00:09:26.706 sys 0m6.655s 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.706 11:33:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.706 11:33:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:26.706 11:33:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.706 11:33:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:26.706 11:33:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.706 11:33:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:26.706 ************************************ 00:09:26.706 START TEST nvmf_nmic 00:09:26.706 ************************************ 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.706 * Looking for test storage... 
00:09:26.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.706 11:33:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:26.706 Cannot find device "nvmf_tgt_br" 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.706 Cannot find device "nvmf_tgt_br2" 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:26.706 Cannot find device "nvmf_tgt_br" 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:26.706 Cannot find device "nvmf_tgt_br2" 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:26.706 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.707 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:26.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:09:26.966 00:09:26.966 --- 10.0.0.2 ping statistics --- 00:09:26.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.966 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:26.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:26.966 00:09:26.966 --- 10.0.0.3 ping statistics --- 00:09:26.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.966 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:26.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:26.966 00:09:26.966 --- 10.0.0.1 ping statistics --- 00:09:26.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.966 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68115 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68115 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68115 ']' 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.966 11:33:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.224 [2024-07-12 11:33:30.415693] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:09:27.224 [2024-07-12 11:33:30.415778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.224 [2024-07-12 11:33:30.550671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.224 [2024-07-12 11:33:30.667794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.224 [2024-07-12 11:33:30.667842] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:27.224 [2024-07-12 11:33:30.667853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.224 [2024-07-12 11:33:30.667862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.224 [2024-07-12 11:33:30.667869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.224 [2024-07-12 11:33:30.668010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.224 [2024-07-12 11:33:30.668070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.224 [2024-07-12 11:33:30.668716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.224 [2024-07-12 11:33:30.668761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.482 [2024-07-12 11:33:30.723762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 [2024-07-12 11:33:31.438113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.048 Malloc0 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.048 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.306 11:33:31 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:28.306 [2024-07-12 11:33:31.508799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:28.306 test case1: single bdev can't be used in multiple subsystems
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:28.306 [2024-07-12 11:33:31.532639] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:09:28.306 [2024-07-12 11:33:31.532673] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:09:28.306 [2024-07-12 11:33:31.532685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:28.306 request:
00:09:28.306 {
00:09:28.306 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:28.306 "namespace": {
00:09:28.306 "bdev_name": "Malloc0",
00:09:28.306 "no_auto_visible": false
00:09:28.306 },
00:09:28.306 "method": "nvmf_subsystem_add_ns",
00:09:28.306 "req_id": 1
00:09:28.306 }
00:09:28.306 Got JSON-RPC error response
00:09:28.306 response:
00:09:28.306 {
00:09:28.306 "code": -32602,
00:09:28.306 "message": "Invalid parameters"
00:09:28.306 }
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:09:28.306 Adding namespace failed - expected result.
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
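The "already claimed" failure above is the expected outcome of test case1: a malloc bdev attached to one subsystem cannot be added to a second one. A minimal sketch of the same sequence outside the test harness (not part of the captured log; it assumes a running nvmf_tgt reachable over the default /var/tmp/spdk.sock and uses only RPCs that appear in the trace above):

  # reproduce test case1 by hand with the stock rpc.py client
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, same options as the run above
  $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB / 512-byte-block malloc bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # cnode1 claims Malloc0 (exclusive_write)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0     # expected to fail: bdev already claimed, JSON-RPC -32602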
00:09:28.306 test case2: host connect to nvmf target in multiple paths
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:09:28.306 [2024-07-12 11:33:31.544759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:28.306 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:09:28.564 11:33:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:09:28.564 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:09:28.564 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:09:28.564 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:09:28.564 11:33:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:09:30.461 11:33:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:09:30.461 11:33:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:30.461 11:33:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:30.461 11:33:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:09:30.461 11:33:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:30.461 11:33:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:09:30.461 11:33:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:30.461 [global]
00:09:30.461 thread=1
00:09:30.461 invalidate=1
00:09:30.461 rw=write
00:09:30.461 time_based=1
00:09:30.461 runtime=1
00:09:30.461 ioengine=libaio
00:09:30.461 direct=1
00:09:30.461 bs=4096
00:09:30.461 iodepth=1
00:09:30.461 norandommap=0
00:09:30.461 numjobs=1
00:09:30.461
00:09:30.461 verify_dump=1
00:09:30.461 verify_backlog=512
00:09:30.461 verify_state_save=0
00:09:30.461 do_verify=1
00:09:30.461 verify=crc32c-intel
00:09:30.461 [job0]
00:09:30.461 filename=/dev/nvme0n1
00:09:30.461 Could not set queue depth (nvme0n1)
00:09:30.719 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:30.719 fio-3.35
00:09:30.719 Starting 1 thread
00:09:32.093
00:09:32.093 job0: (groupid=0, jobs=1): err= 0: pid=68207: Fri Jul 12 11:33:35 2024
00:09:32.093 read: IOPS=2946, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec)
00:09:32.093 slat (nsec): min=12053, max=41493, avg=14650.78, stdev=2985.51
00:09:32.093 clat (usec): min=145, max=297, avg=182.85, stdev=15.01
00:09:32.093 lat (usec): min=159, max=310, avg=197.50, stdev=15.42
00:09:32.093 clat percentiles (usec):
00:09:32.093 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172],
00:09:32.093 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186],
00:09:32.093 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206],
00:09:32.093 | 99.00th=[ 229], 99.50th=[ 241], 99.90th=[ 260], 99.95th=[ 277],
00:09:32.093 | 99.99th=[ 297]
00:09:32.093 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:09:32.093 slat (usec): min=17, max=109, avg=22.41, stdev= 5.88
00:09:32.093 clat (usec): min=88, max=261, avg=110.15, stdev=12.73
00:09:32.093 lat (usec): min=107, max=329, avg=132.56, stdev=15.03
00:09:32.093 clat percentiles (usec):
00:09:32.093 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101],
00:09:32.093 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 112],
00:09:32.093 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 125], 95.00th=[ 133],
00:09:32.093 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 192], 99.95th=[ 247],
00:09:32.093 | 99.99th=[ 262]
00:09:32.093 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:09:32.093 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:09:32.093 lat (usec) : 100=9.57%, 250=90.33%, 500=0.10%
00:09:32.093 cpu : usr=1.40%, sys=9.60%, ctx=6021, majf=0, minf=2
00:09:32.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:32.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:32.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:32.093 issued rwts: total=2949,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:32.093 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:32.093
00:09:32.093 Run status group 0 (all jobs):
00:09:32.093 READ: bw=11.5MiB/s (12.1MB/s), 11.5MiB/s-11.5MiB/s (12.1MB/s-12.1MB/s), io=11.5MiB (12.1MB), run=1001-1001msec
00:09:32.093 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec
00:09:32.093
00:09:32.093 Disk stats (read/write):
00:09:32.093 nvme0n1: ios=2610/2921, merge=0/0, ticks=494/356, in_queue=850, util=91.38%
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:09:32.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync
00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.093 rmmod nvme_tcp 00:09:32.093 rmmod nvme_fabrics 00:09:32.093 rmmod nvme_keyring 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68115 ']' 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68115 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68115 ']' 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68115 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68115 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.093 killing process with pid 68115 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68115' 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68115 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68115 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.093 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.352 11:33:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:32.352 ************************************ 00:09:32.352 END TEST nvmf_nmic 00:09:32.352 ************************************ 00:09:32.352 00:09:32.352 real 0m5.653s 00:09:32.352 user 0m18.059s 00:09:32.352 sys 0m2.251s 00:09:32.352 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.352 11:33:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.352 11:33:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:32.352 11:33:35 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:32.352 11:33:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:32.352 11:33:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:32.352 11:33:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:32.353 ************************************ 00:09:32.353 START TEST nvmf_fio_target 00:09:32.353 ************************************ 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:32.353 * Looking for test storage... 00:09:32.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:32.353 
11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:32.353 Cannot find device "nvmf_tgt_br" 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.353 Cannot find device "nvmf_tgt_br2" 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:32.353 Cannot find device "nvmf_tgt_br" 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:32.353 Cannot find device "nvmf_tgt_br2" 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:32.353 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:32.611 11:33:35 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:32.611 11:33:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:32.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:32.611 00:09:32.611 --- 10.0.0.2 ping statistics --- 00:09:32.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.611 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:32.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:09:32.611 00:09:32.611 --- 10.0.0.3 ping statistics --- 00:09:32.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.611 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:32.611 00:09:32.611 --- 10.0.0.1 ping statistics --- 00:09:32.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.611 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.611 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.869 11:33:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:32.869 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.869 11:33:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.869 11:33:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.869 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68385 00:09:32.869 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.870 11:33:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68385 00:09:32.870 11:33:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68385 ']' 00:09:32.870 11:33:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.870 11:33:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.870 11:33:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:32.870 11:33:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.870 11:33:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.870 [2024-07-12 11:33:36.116019] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:09:32.870 [2024-07-12 11:33:36.116101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.870 [2024-07-12 11:33:36.250402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.127 [2024-07-12 11:33:36.374376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.127 [2024-07-12 11:33:36.374439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.127 [2024-07-12 11:33:36.374457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.127 [2024-07-12 11:33:36.374472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.127 [2024-07-12 11:33:36.374485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.127 [2024-07-12 11:33:36.374618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.127 [2024-07-12 11:33:36.375121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.127 [2024-07-12 11:33:36.375289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.127 [2024-07-12 11:33:36.375305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.127 [2024-07-12 11:33:36.433535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.694 11:33:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:33.694 11:33:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:33.694 11:33:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.694 11:33:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:33.694 11:33:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.694 11:33:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.694 11:33:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:33.952 [2024-07-12 11:33:37.331731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.952 11:33:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.517 11:33:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:34.517 11:33:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.517 11:33:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:34.517 11:33:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:34.775 11:33:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
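Condensed from the rpc.py calls traced here and continued just below, the target-side configuration that target/fio.sh drives amounts to the following sequence (a sketch only: paths are shortened to scripts/rpc.py, the repeated bdev_malloc_create calls are abbreviated, and the malloc bdevs produced are Malloc0 through Malloc6 as the trace shows):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512        # repeated; produces Malloc0 .. Malloc6
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=... --hostid=...   # hostnqn/hostid as in the trace

The initiator side then sees the four namespaces as /dev/nvme0n1 .. /dev/nvme0n4, which is what waitforserial checks for and what the fio job files below use as filenames.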
00:09:34.775 11:33:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.341 11:33:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:35.341 11:33:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:35.598 11:33:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:35.857 11:33:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:35.857 11:33:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.115 11:33:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:36.115 11:33:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.372 11:33:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:36.372 11:33:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:36.630 11:33:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:36.888 11:33:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:36.888 11:33:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.145 11:33:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:37.145 11:33:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.403 11:33:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.660 [2024-07-12 11:33:40.895363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.660 11:33:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:37.918 11:33:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:38.176 11:33:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:38.176 11:33:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:38.176 11:33:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.176 11:33:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.176 11:33:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:38.176 11:33:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # 
nvme_device_counter=4 00:09:38.176 11:33:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.707 11:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.707 11:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.707 11:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.707 11:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:40.707 11:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.707 11:33:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:40.707 11:33:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:40.707 [global] 00:09:40.707 thread=1 00:09:40.707 invalidate=1 00:09:40.707 rw=write 00:09:40.707 time_based=1 00:09:40.707 runtime=1 00:09:40.708 ioengine=libaio 00:09:40.708 direct=1 00:09:40.708 bs=4096 00:09:40.708 iodepth=1 00:09:40.708 norandommap=0 00:09:40.708 numjobs=1 00:09:40.708 00:09:40.708 verify_dump=1 00:09:40.708 verify_backlog=512 00:09:40.708 verify_state_save=0 00:09:40.708 do_verify=1 00:09:40.708 verify=crc32c-intel 00:09:40.708 [job0] 00:09:40.708 filename=/dev/nvme0n1 00:09:40.708 [job1] 00:09:40.708 filename=/dev/nvme0n2 00:09:40.708 [job2] 00:09:40.708 filename=/dev/nvme0n3 00:09:40.708 [job3] 00:09:40.708 filename=/dev/nvme0n4 00:09:40.708 Could not set queue depth (nvme0n1) 00:09:40.708 Could not set queue depth (nvme0n2) 00:09:40.708 Could not set queue depth (nvme0n3) 00:09:40.708 Could not set queue depth (nvme0n4) 00:09:40.708 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.708 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.708 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.708 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:40.708 fio-3.35 00:09:40.708 Starting 4 threads 00:09:41.644 00:09:41.644 job0: (groupid=0, jobs=1): err= 0: pid=68568: Fri Jul 12 11:33:45 2024 00:09:41.644 read: IOPS=1672, BW=6689KiB/s (6850kB/s)(6696KiB/1001msec) 00:09:41.644 slat (usec): min=13, max=398, avg=21.08, stdev= 9.92 00:09:41.644 clat (usec): min=153, max=890, avg=320.85, stdev=84.89 00:09:41.644 lat (usec): min=174, max=910, avg=341.93, stdev=86.02 00:09:41.644 clat percentiles (usec): 00:09:41.644 | 1.00th=[ 184], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:09:41.644 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 302], 00:09:41.644 | 70.00th=[ 347], 80.00th=[ 388], 90.00th=[ 474], 95.00th=[ 486], 00:09:41.644 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 881], 99.95th=[ 889], 00:09:41.644 | 99.99th=[ 889] 00:09:41.644 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:41.644 slat (usec): min=18, max=444, avg=29.70, stdev=11.47 00:09:41.644 clat (usec): min=92, max=2108, avg=174.84, stdev=81.39 00:09:41.644 lat (usec): min=121, max=2145, avg=204.54, stdev=84.26 00:09:41.644 clat percentiles (usec): 00:09:41.644 | 1.00th=[ 102], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 117], 00:09:41.644 | 30.00th=[ 122], 40.00th=[ 133], 50.00th=[ 159], 60.00th=[ 172], 
00:09:41.644 | 70.00th=[ 186], 80.00th=[ 217], 90.00th=[ 310], 95.00th=[ 326], 00:09:41.644 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 400], 99.95th=[ 474], 00:09:41.644 | 99.99th=[ 2114] 00:09:41.644 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:09:41.644 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:41.644 lat (usec) : 100=0.30%, 250=50.46%, 500=48.47%, 750=0.67%, 1000=0.08% 00:09:41.644 lat (msec) : 4=0.03% 00:09:41.644 cpu : usr=1.90%, sys=7.50%, ctx=3724, majf=0, minf=13 00:09:41.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.644 issued rwts: total=1674,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.644 job1: (groupid=0, jobs=1): err= 0: pid=68569: Fri Jul 12 11:33:45 2024 00:09:41.644 read: IOPS=1722, BW=6889KiB/s (7054kB/s)(6896KiB/1001msec) 00:09:41.644 slat (nsec): min=14466, max=60410, avg=20901.42, stdev=7012.18 00:09:41.644 clat (usec): min=166, max=658, avg=300.44, stdev=75.81 00:09:41.644 lat (usec): min=185, max=691, avg=321.34, stdev=79.59 00:09:41.644 clat percentiles (usec): 00:09:41.644 | 1.00th=[ 198], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 251], 00:09:41.644 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:09:41.644 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 441], 95.00th=[ 490], 00:09:41.644 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 594], 99.95th=[ 660], 00:09:41.644 | 99.99th=[ 660] 00:09:41.644 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:41.644 slat (usec): min=16, max=105, avg=25.10, stdev= 4.86 00:09:41.644 clat (usec): min=93, max=1781, avg=188.52, stdev=54.43 00:09:41.644 lat (usec): min=116, max=1803, avg=213.63, stdev=55.14 00:09:41.644 clat percentiles (usec): 00:09:41.644 | 1.00th=[ 98], 5.00th=[ 106], 10.00th=[ 114], 20.00th=[ 167], 00:09:41.644 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:09:41.644 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:09:41.644 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 318], 99.95th=[ 494], 00:09:41.644 | 99.99th=[ 1778] 00:09:41.644 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:09:41.644 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:41.644 lat (usec) : 100=1.06%, 250=59.49%, 500=37.70%, 750=1.72% 00:09:41.644 lat (msec) : 2=0.03% 00:09:41.644 cpu : usr=1.90%, sys=6.80%, ctx=3776, majf=0, minf=9 00:09:41.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.644 issued rwts: total=1724,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.644 job2: (groupid=0, jobs=1): err= 0: pid=68575: Fri Jul 12 11:33:45 2024 00:09:41.644 read: IOPS=1902, BW=7608KiB/s (7791kB/s)(7616KiB/1001msec) 00:09:41.644 slat (nsec): min=11698, max=58950, avg=17085.06, stdev=6873.64 00:09:41.644 clat (usec): min=153, max=867, avg=311.19, stdev=82.86 00:09:41.644 lat (usec): min=168, max=886, avg=328.27, stdev=87.32 00:09:41.644 clat percentiles (usec): 00:09:41.644 | 1.00th=[ 
182], 5.00th=[ 231], 10.00th=[ 245], 20.00th=[ 258], 00:09:41.644 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 289], 00:09:41.644 | 70.00th=[ 326], 80.00th=[ 363], 90.00th=[ 453], 95.00th=[ 469], 00:09:41.644 | 99.00th=[ 502], 99.50th=[ 603], 99.90th=[ 816], 99.95th=[ 865], 00:09:41.644 | 99.99th=[ 865] 00:09:41.644 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:41.644 slat (usec): min=15, max=251, avg=20.49, stdev= 7.13 00:09:41.644 clat (usec): min=98, max=1856, avg=158.86, stdev=53.55 00:09:41.644 lat (usec): min=117, max=1878, avg=179.34, stdev=54.43 00:09:41.644 clat percentiles (usec): 00:09:41.644 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 122], 00:09:41.644 | 30.00th=[ 128], 40.00th=[ 135], 50.00th=[ 147], 60.00th=[ 178], 00:09:41.644 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 212], 00:09:41.644 | 99.00th=[ 237], 99.50th=[ 251], 99.90th=[ 506], 99.95th=[ 553], 00:09:41.644 | 99.99th=[ 1860] 00:09:41.644 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:09:41.644 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:41.644 lat (usec) : 100=0.08%, 250=58.05%, 500=41.19%, 750=0.58%, 1000=0.08% 00:09:41.644 lat (msec) : 2=0.03% 00:09:41.644 cpu : usr=1.10%, sys=6.40%, ctx=3960, majf=0, minf=8 00:09:41.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.644 issued rwts: total=1904,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.644 job3: (groupid=0, jobs=1): err= 0: pid=68577: Fri Jul 12 11:33:45 2024 00:09:41.644 read: IOPS=1674, BW=6697KiB/s (6858kB/s)(6704KiB/1001msec) 00:09:41.644 slat (nsec): min=12116, max=60644, avg=20571.07, stdev=3747.95 00:09:41.644 clat (usec): min=174, max=654, avg=289.37, stdev=55.05 00:09:41.644 lat (usec): min=199, max=674, avg=309.94, stdev=55.50 00:09:41.644 clat percentiles (usec): 00:09:41.644 | 1.00th=[ 198], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 249], 00:09:41.644 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289], 00:09:41.644 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 404], 00:09:41.644 | 99.00th=[ 486], 99.50th=[ 523], 99.90th=[ 570], 99.95th=[ 652], 00:09:41.644 | 99.99th=[ 652] 00:09:41.644 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:41.644 slat (usec): min=18, max=130, avg=29.62, stdev= 6.48 00:09:41.644 clat (usec): min=100, max=2876, avg=200.89, stdev=80.40 00:09:41.645 lat (usec): min=127, max=2914, avg=230.51, stdev=81.77 00:09:41.645 clat percentiles (usec): 00:09:41.645 | 1.00th=[ 111], 5.00th=[ 121], 10.00th=[ 133], 20.00th=[ 161], 00:09:41.645 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 204], 00:09:41.645 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 285], 95.00th=[ 310], 00:09:41.645 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 676], 99.95th=[ 791], 00:09:41.645 | 99.99th=[ 2868] 00:09:41.645 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:09:41.645 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:41.645 lat (usec) : 250=58.00%, 500=41.57%, 750=0.38%, 1000=0.03% 00:09:41.645 lat (msec) : 4=0.03% 00:09:41.645 cpu : usr=2.40%, sys=6.90%, ctx=3724, majf=0, minf=5 00:09:41.645 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.645 issued rwts: total=1676,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.645 00:09:41.645 Run status group 0 (all jobs): 00:09:41.645 READ: bw=27.2MiB/s (28.6MB/s), 6689KiB/s-7608KiB/s (6850kB/s-7791kB/s), io=27.3MiB (28.6MB), run=1001-1001msec 00:09:41.645 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:09:41.645 00:09:41.645 Disk stats (read/write): 00:09:41.645 nvme0n1: ios=1578/1536, merge=0/0, ticks=514/282, in_queue=796, util=87.30% 00:09:41.645 nvme0n2: ios=1541/1625, merge=0/0, ticks=482/334, in_queue=816, util=87.16% 00:09:41.645 nvme0n3: ios=1536/1930, merge=0/0, ticks=476/319, in_queue=795, util=89.25% 00:09:41.645 nvme0n4: ios=1536/1547, merge=0/0, ticks=463/340, in_queue=803, util=89.81% 00:09:41.645 11:33:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:41.645 [global] 00:09:41.645 thread=1 00:09:41.645 invalidate=1 00:09:41.645 rw=randwrite 00:09:41.645 time_based=1 00:09:41.645 runtime=1 00:09:41.645 ioengine=libaio 00:09:41.645 direct=1 00:09:41.645 bs=4096 00:09:41.645 iodepth=1 00:09:41.645 norandommap=0 00:09:41.645 numjobs=1 00:09:41.645 00:09:41.645 verify_dump=1 00:09:41.645 verify_backlog=512 00:09:41.645 verify_state_save=0 00:09:41.645 do_verify=1 00:09:41.645 verify=crc32c-intel 00:09:41.645 [job0] 00:09:41.645 filename=/dev/nvme0n1 00:09:41.645 [job1] 00:09:41.645 filename=/dev/nvme0n2 00:09:41.645 [job2] 00:09:41.645 filename=/dev/nvme0n3 00:09:41.645 [job3] 00:09:41.645 filename=/dev/nvme0n4 00:09:41.645 Could not set queue depth (nvme0n1) 00:09:41.645 Could not set queue depth (nvme0n2) 00:09:41.645 Could not set queue depth (nvme0n3) 00:09:41.645 Could not set queue depth (nvme0n4) 00:09:41.903 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.903 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.903 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.903 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:41.903 fio-3.35 00:09:41.903 Starting 4 threads 00:09:43.278 00:09:43.278 job0: (groupid=0, jobs=1): err= 0: pid=68630: Fri Jul 12 11:33:46 2024 00:09:43.278 read: IOPS=3155, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec) 00:09:43.278 slat (nsec): min=10946, max=31116, avg=12370.22, stdev=1445.46 00:09:43.278 clat (usec): min=129, max=398, avg=154.81, stdev=15.29 00:09:43.278 lat (usec): min=141, max=412, avg=167.18, stdev=15.44 00:09:43.278 clat percentiles (usec): 00:09:43.278 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:09:43.278 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:09:43.278 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 174], 00:09:43.278 | 99.00th=[ 192], 99.50th=[ 253], 99.90th=[ 355], 99.95th=[ 396], 00:09:43.278 | 99.99th=[ 400] 00:09:43.278 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:43.278 slat (nsec): min=13219, 
max=85254, avg=19175.08, stdev=3378.58 00:09:43.278 clat (usec): min=86, max=292, avg=109.37, stdev=13.63 00:09:43.278 lat (usec): min=104, max=321, avg=128.55, stdev=14.87 00:09:43.278 clat percentiles (usec): 00:09:43.278 | 1.00th=[ 91], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 99], 00:09:43.278 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 111], 00:09:43.278 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 133], 00:09:43.278 | 99.00th=[ 145], 99.50th=[ 151], 99.90th=[ 245], 99.95th=[ 281], 00:09:43.278 | 99.99th=[ 293] 00:09:43.278 bw ( KiB/s): min=14408, max=14408, per=38.70%, avg=14408.00, stdev= 0.00, samples=1 00:09:43.278 iops : min= 3602, max= 3602, avg=3602.00, stdev= 0.00, samples=1 00:09:43.278 lat (usec) : 100=12.31%, 250=87.39%, 500=0.30% 00:09:43.278 cpu : usr=2.20%, sys=8.60%, ctx=6745, majf=0, minf=17 00:09:43.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.278 issued rwts: total=3159,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.278 job1: (groupid=0, jobs=1): err= 0: pid=68631: Fri Jul 12 11:33:46 2024 00:09:43.278 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:43.278 slat (usec): min=8, max=184, avg=12.86, stdev= 6.09 00:09:43.278 clat (usec): min=229, max=1675, avg=321.49, stdev=55.63 00:09:43.278 lat (usec): min=243, max=1684, avg=334.35, stdev=56.28 00:09:43.278 clat percentiles (usec): 00:09:43.278 | 1.00th=[ 245], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:09:43.278 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:09:43.278 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 396], 00:09:43.278 | 99.00th=[ 469], 99.50th=[ 494], 99.90th=[ 807], 99.95th=[ 1680], 00:09:43.278 | 99.99th=[ 1680] 00:09:43.278 write: IOPS=1851, BW=7405KiB/s (7582kB/s)(7412KiB/1001msec); 0 zone resets 00:09:43.278 slat (usec): min=8, max=216, avg=21.28, stdev=18.07 00:09:43.278 clat (usec): min=118, max=461, avg=238.52, stdev=45.95 00:09:43.278 lat (usec): min=145, max=501, avg=259.81, stdev=50.55 00:09:43.278 clat percentiles (usec): 00:09:43.278 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 198], 00:09:43.278 | 30.00th=[ 212], 40.00th=[ 223], 50.00th=[ 233], 60.00th=[ 245], 00:09:43.278 | 70.00th=[ 258], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 322], 00:09:43.278 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 449], 99.95th=[ 461], 00:09:43.278 | 99.99th=[ 461] 00:09:43.278 bw ( KiB/s): min= 8192, max= 8192, per=22.00%, avg=8192.00, stdev= 0.00, samples=1 00:09:43.278 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:43.278 lat (usec) : 250=36.18%, 500=63.62%, 750=0.12%, 1000=0.06% 00:09:43.278 lat (msec) : 2=0.03% 00:09:43.278 cpu : usr=1.20%, sys=4.80%, ctx=3600, majf=0, minf=15 00:09:43.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.278 issued rwts: total=1536,1853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.278 job2: (groupid=0, jobs=1): err= 0: pid=68632: Fri Jul 12 11:33:46 2024 00:09:43.278 read: IOPS=1534, BW=6138KiB/s 
(6285kB/s)(6144KiB/1001msec) 00:09:43.278 slat (usec): min=7, max=140, avg=13.47, stdev=11.03 00:09:43.278 clat (usec): min=157, max=1890, avg=318.01, stdev=89.61 00:09:43.278 lat (usec): min=174, max=1910, avg=331.48, stdev=90.65 00:09:43.279 clat percentiles (usec): 00:09:43.279 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 196], 20.00th=[ 285], 00:09:43.279 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:09:43.279 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 437], 95.00th=[ 453], 00:09:43.279 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 1287], 99.95th=[ 1893], 00:09:43.279 | 99.99th=[ 1893] 00:09:43.279 write: IOPS=2025, BW=8104KiB/s (8298kB/s)(8112KiB/1001msec); 0 zone resets 00:09:43.279 slat (usec): min=11, max=1443, avg=22.17, stdev=33.07 00:09:43.279 clat (nsec): min=1506, max=7125.8k, avg=216877.35, stdev=174261.53 00:09:43.279 lat (usec): min=124, max=7154, avg=239.05, stdev=176.41 00:09:43.279 clat percentiles (usec): 00:09:43.279 | 1.00th=[ 113], 5.00th=[ 122], 10.00th=[ 127], 20.00th=[ 137], 00:09:43.279 | 30.00th=[ 149], 40.00th=[ 198], 50.00th=[ 225], 60.00th=[ 241], 00:09:43.279 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 318], 00:09:43.279 | 99.00th=[ 359], 99.50th=[ 400], 99.90th=[ 1565], 99.95th=[ 1795], 00:09:43.279 | 99.99th=[ 7111] 00:09:43.279 bw ( KiB/s): min= 8192, max= 8192, per=22.00%, avg=8192.00, stdev= 0.00, samples=1 00:09:43.279 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:43.279 lat (usec) : 2=0.06%, 4=0.03%, 100=0.06%, 250=42.54%, 500=56.20% 00:09:43.279 lat (usec) : 750=0.95%, 1000=0.03% 00:09:43.279 lat (msec) : 2=0.11%, 10=0.03% 00:09:43.279 cpu : usr=1.50%, sys=5.00%, ctx=3651, majf=0, minf=5 00:09:43.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.279 issued rwts: total=1536,2028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.279 job3: (groupid=0, jobs=1): err= 0: pid=68633: Fri Jul 12 11:33:46 2024 00:09:43.279 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:43.279 slat (usec): min=7, max=641, avg=13.93, stdev=17.13 00:09:43.279 clat (usec): min=2, max=1660, avg=320.40, stdev=55.71 00:09:43.279 lat (usec): min=242, max=1674, avg=334.33, stdev=55.76 00:09:43.279 clat percentiles (usec): 00:09:43.279 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 293], 00:09:43.279 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:09:43.279 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 379], 95.00th=[ 400], 00:09:43.279 | 99.00th=[ 465], 99.50th=[ 502], 99.90th=[ 881], 99.95th=[ 1663], 00:09:43.279 | 99.99th=[ 1663] 00:09:43.279 write: IOPS=1850, BW=7401KiB/s (7578kB/s)(7408KiB/1001msec); 0 zone resets 00:09:43.279 slat (usec): min=7, max=185, avg=20.96, stdev=14.57 00:09:43.279 clat (usec): min=119, max=510, avg=238.84, stdev=49.48 00:09:43.279 lat (usec): min=142, max=524, avg=259.80, stdev=51.20 00:09:43.279 clat percentiles (usec): 00:09:43.279 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 194], 00:09:43.279 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 245], 00:09:43.279 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 326], 00:09:43.279 | 99.00th=[ 392], 99.50th=[ 429], 99.90th=[ 453], 99.95th=[ 510], 00:09:43.279 | 99.99th=[ 510] 00:09:43.279 bw ( 
KiB/s): min= 8192, max= 8192, per=22.00%, avg=8192.00, stdev= 0.00, samples=1 00:09:43.279 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:43.279 lat (usec) : 4=0.03%, 250=35.09%, 500=64.61%, 750=0.21%, 1000=0.03% 00:09:43.279 lat (msec) : 2=0.03% 00:09:43.279 cpu : usr=1.50%, sys=4.70%, ctx=3625, majf=0, minf=8 00:09:43.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:43.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:43.279 issued rwts: total=1536,1852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:43.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:43.279 00:09:43.279 Run status group 0 (all jobs): 00:09:43.279 READ: bw=30.3MiB/s (31.8MB/s), 6138KiB/s-12.3MiB/s (6285kB/s-12.9MB/s), io=30.3MiB (31.8MB), run=1001-1001msec 00:09:43.279 WRITE: bw=36.4MiB/s (38.1MB/s), 7401KiB/s-14.0MiB/s (7578kB/s-14.7MB/s), io=36.4MiB (38.2MB), run=1001-1001msec 00:09:43.279 00:09:43.279 Disk stats (read/write): 00:09:43.279 nvme0n1: ios=2785/3072, merge=0/0, ticks=463/365, in_queue=828, util=88.68% 00:09:43.279 nvme0n2: ios=1444/1536, merge=0/0, ticks=444/327, in_queue=771, util=88.38% 00:09:43.279 nvme0n3: ios=1553/1552, merge=0/0, ticks=482/299, in_queue=781, util=88.45% 00:09:43.279 nvme0n4: ios=1396/1536, merge=0/0, ticks=438/333, in_queue=771, util=89.72% 00:09:43.279 11:33:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:43.279 [global] 00:09:43.279 thread=1 00:09:43.279 invalidate=1 00:09:43.279 rw=write 00:09:43.279 time_based=1 00:09:43.279 runtime=1 00:09:43.279 ioengine=libaio 00:09:43.279 direct=1 00:09:43.279 bs=4096 00:09:43.279 iodepth=128 00:09:43.279 norandommap=0 00:09:43.279 numjobs=1 00:09:43.279 00:09:43.279 verify_dump=1 00:09:43.279 verify_backlog=512 00:09:43.279 verify_state_save=0 00:09:43.279 do_verify=1 00:09:43.279 verify=crc32c-intel 00:09:43.279 [job0] 00:09:43.279 filename=/dev/nvme0n1 00:09:43.279 [job1] 00:09:43.279 filename=/dev/nvme0n2 00:09:43.279 [job2] 00:09:43.279 filename=/dev/nvme0n3 00:09:43.279 [job3] 00:09:43.279 filename=/dev/nvme0n4 00:09:43.279 Could not set queue depth (nvme0n1) 00:09:43.279 Could not set queue depth (nvme0n2) 00:09:43.279 Could not set queue depth (nvme0n3) 00:09:43.279 Could not set queue depth (nvme0n4) 00:09:43.279 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.279 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.279 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.279 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:43.279 fio-3.35 00:09:43.279 Starting 4 threads 00:09:44.654 00:09:44.654 job0: (groupid=0, jobs=1): err= 0: pid=68692: Fri Jul 12 11:33:47 2024 00:09:44.654 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:09:44.654 slat (usec): min=7, max=2734, avg=82.19, stdev=367.79 00:09:44.654 clat (usec): min=8374, max=13145, avg=11112.72, stdev=503.39 00:09:44.654 lat (usec): min=9461, max=13156, avg=11194.90, stdev=352.43 00:09:44.654 clat percentiles (usec): 00:09:44.654 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[10683], 20.00th=[10814], 00:09:44.654 | 30.00th=[10945], 
40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:09:44.654 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:09:44.654 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12387], 99.95th=[13173], 00:09:44.654 | 99.99th=[13173] 00:09:44.654 write: IOPS=6122, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:09:44.654 slat (usec): min=10, max=2503, avg=79.96, stdev=320.68 00:09:44.654 clat (usec): min=1299, max=12578, avg=10433.25, stdev=861.05 00:09:44.654 lat (usec): min=1320, max=12597, avg=10513.21, stdev=802.50 00:09:44.654 clat percentiles (usec): 00:09:44.654 | 1.00th=[ 6587], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10290], 00:09:44.654 | 30.00th=[10421], 40.00th=[10421], 50.00th=[10552], 60.00th=[10552], 00:09:44.654 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11076], 00:09:44.654 | 99.00th=[11469], 99.50th=[11469], 99.90th=[12256], 99.95th=[12518], 00:09:44.654 | 99.99th=[12518] 00:09:44.654 bw ( KiB/s): min=23488, max=24576, per=34.27%, avg=24032.00, stdev=769.33, samples=2 00:09:44.654 iops : min= 5872, max= 6144, avg=6008.00, stdev=192.33, samples=2 00:09:44.654 lat (msec) : 2=0.16%, 4=0.06%, 10=4.74%, 20=95.04% 00:09:44.654 cpu : usr=6.59%, sys=14.99%, ctx=394, majf=0, minf=7 00:09:44.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:44.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.654 issued rwts: total=5632,6135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.654 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.654 job1: (groupid=0, jobs=1): err= 0: pid=68693: Fri Jul 12 11:33:47 2024 00:09:44.654 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:09:44.654 slat (usec): min=6, max=7651, avg=184.01, stdev=684.63 00:09:44.654 clat (usec): min=16900, max=31789, avg=23165.98, stdev=2232.89 00:09:44.654 lat (usec): min=17014, max=33201, avg=23349.98, stdev=2229.47 00:09:44.654 clat percentiles (usec): 00:09:44.654 | 1.00th=[17695], 5.00th=[19792], 10.00th=[20579], 20.00th=[21627], 00:09:44.654 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:09:44.654 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25822], 95.00th=[27657], 00:09:44.654 | 99.00th=[30278], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:09:44.654 | 99.99th=[31851] 00:09:44.654 write: IOPS=2982, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1005msec); 0 zone resets 00:09:44.654 slat (usec): min=6, max=7404, avg=169.18, stdev=679.89 00:09:44.654 clat (usec): min=4468, max=36270, avg=22348.13, stdev=4338.89 00:09:44.654 lat (usec): min=5300, max=36318, avg=22517.31, stdev=4348.00 00:09:44.654 clat percentiles (usec): 00:09:44.654 | 1.00th=[ 9110], 5.00th=[15926], 10.00th=[17433], 20.00th=[19530], 00:09:44.654 | 30.00th=[21103], 40.00th=[21890], 50.00th=[22152], 60.00th=[22938], 00:09:44.654 | 70.00th=[23725], 80.00th=[24511], 90.00th=[26608], 95.00th=[30278], 00:09:44.654 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:09:44.654 | 99.99th=[36439] 00:09:44.655 bw ( KiB/s): min=10664, max=12312, per=16.38%, avg=11488.00, stdev=1165.31, samples=2 00:09:44.655 iops : min= 2666, max= 3078, avg=2872.00, stdev=291.33, samples=2 00:09:44.655 lat (msec) : 10=0.56%, 20=15.01%, 50=84.43% 00:09:44.655 cpu : usr=2.29%, sys=8.57%, ctx=756, majf=0, minf=11 00:09:44.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:44.655 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.655 issued rwts: total=2560,2997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.655 job2: (groupid=0, jobs=1): err= 0: pid=68695: Fri Jul 12 11:33:47 2024 00:09:44.655 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:09:44.655 slat (usec): min=4, max=4891, avg=93.20, stdev=410.53 00:09:44.655 clat (usec): min=8506, max=17178, avg=12423.26, stdev=875.93 00:09:44.655 lat (usec): min=9420, max=17217, avg=12516.46, stdev=888.17 00:09:44.655 clat percentiles (usec): 00:09:44.655 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11338], 20.00th=[12125], 00:09:44.655 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12518], 00:09:44.655 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[14091], 00:09:44.655 | 99.00th=[15533], 99.50th=[15664], 99.90th=[16057], 99.95th=[16712], 00:09:44.655 | 99.99th=[17171] 00:09:44.655 write: IOPS=5437, BW=21.2MiB/s (22.3MB/s)(21.3MiB/1002msec); 0 zone resets 00:09:44.655 slat (usec): min=11, max=4948, avg=88.29, stdev=504.37 00:09:44.655 clat (usec): min=1861, max=17309, avg=11589.93, stdev=1315.20 00:09:44.655 lat (usec): min=1880, max=17357, avg=11678.21, stdev=1394.41 00:09:44.655 clat percentiles (usec): 00:09:44.655 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11207], 00:09:44.655 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:09:44.655 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[13304], 00:09:44.655 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16909], 99.95th=[17171], 00:09:44.655 | 99.99th=[17433] 00:09:44.655 bw ( KiB/s): min=20728, max=21840, per=30.35%, avg=21284.00, stdev=786.30, samples=2 00:09:44.655 iops : min= 5182, max= 5460, avg=5321.00, stdev=196.58, samples=2 00:09:44.655 lat (msec) : 2=0.06%, 4=0.17%, 10=3.10%, 20=96.67% 00:09:44.655 cpu : usr=4.70%, sys=14.69%, ctx=321, majf=0, minf=10 00:09:44.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:44.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.655 issued rwts: total=5120,5448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.655 job3: (groupid=0, jobs=1): err= 0: pid=68696: Fri Jul 12 11:33:47 2024 00:09:44.655 read: IOPS=2575, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1007msec) 00:09:44.655 slat (usec): min=6, max=9659, avg=185.49, stdev=693.00 00:09:44.655 clat (usec): min=6250, max=34823, avg=23553.87, stdev=3393.72 00:09:44.655 lat (usec): min=6939, max=34841, avg=23739.36, stdev=3397.77 00:09:44.655 clat percentiles (usec): 00:09:44.655 | 1.00th=[10290], 5.00th=[18220], 10.00th=[19792], 20.00th=[21890], 00:09:44.655 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:09:44.655 | 70.00th=[24249], 80.00th=[26084], 90.00th=[27919], 95.00th=[29230], 00:09:44.655 | 99.00th=[32637], 99.50th=[33162], 99.90th=[34866], 99.95th=[34866], 00:09:44.655 | 99.99th=[34866] 00:09:44.655 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:09:44.655 slat (usec): min=8, max=6668, avg=161.69, stdev=665.51 00:09:44.655 clat (usec): min=11828, max=32100, avg=21409.70, stdev=3290.29 00:09:44.655 lat (usec): min=11858, max=32114, avg=21571.39, 
stdev=3299.50 00:09:44.655 clat percentiles (usec): 00:09:44.655 | 1.00th=[14222], 5.00th=[15926], 10.00th=[16909], 20.00th=[17957], 00:09:44.655 | 30.00th=[19268], 40.00th=[21103], 50.00th=[21890], 60.00th=[22676], 00:09:44.655 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25297], 95.00th=[26870], 00:09:44.655 | 99.00th=[28705], 99.50th=[28705], 99.90th=[32113], 99.95th=[32113], 00:09:44.655 | 99.99th=[32113] 00:09:44.655 bw ( KiB/s): min=11544, max=12263, per=16.98%, avg=11903.50, stdev=508.41, samples=2 00:09:44.655 iops : min= 2886, max= 3065, avg=2975.50, stdev=126.57, samples=2 00:09:44.655 lat (msec) : 10=0.35%, 20=22.98%, 50=76.67% 00:09:44.655 cpu : usr=3.08%, sys=7.95%, ctx=769, majf=0, minf=7 00:09:44.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:44.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.655 issued rwts: total=2594,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.655 00:09:44.655 Run status group 0 (all jobs): 00:09:44.655 READ: bw=61.7MiB/s (64.7MB/s), 9.95MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=62.1MiB (65.1MB), run=1002-1007msec 00:09:44.655 WRITE: bw=68.5MiB/s (71.8MB/s), 11.6MiB/s-23.9MiB/s (12.2MB/s-25.1MB/s), io=69.0MiB (72.3MB), run=1002-1007msec 00:09:44.655 00:09:44.655 Disk stats (read/write): 00:09:44.655 nvme0n1: ios=5170/5124, merge=0/0, ticks=12354/11032, in_queue=23386, util=89.08% 00:09:44.655 nvme0n2: ios=2286/2560, merge=0/0, ticks=16450/17132, in_queue=33582, util=89.51% 00:09:44.655 nvme0n3: ios=4561/4608, merge=0/0, ticks=27048/22391, in_queue=49439, util=90.08% 00:09:44.655 nvme0n4: ios=2388/2560, merge=0/0, ticks=17976/15746, in_queue=33722, util=90.75% 00:09:44.655 11:33:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:44.655 [global] 00:09:44.655 thread=1 00:09:44.655 invalidate=1 00:09:44.655 rw=randwrite 00:09:44.655 time_based=1 00:09:44.655 runtime=1 00:09:44.655 ioengine=libaio 00:09:44.655 direct=1 00:09:44.655 bs=4096 00:09:44.655 iodepth=128 00:09:44.655 norandommap=0 00:09:44.655 numjobs=1 00:09:44.655 00:09:44.655 verify_dump=1 00:09:44.655 verify_backlog=512 00:09:44.655 verify_state_save=0 00:09:44.655 do_verify=1 00:09:44.655 verify=crc32c-intel 00:09:44.655 [job0] 00:09:44.655 filename=/dev/nvme0n1 00:09:44.655 [job1] 00:09:44.655 filename=/dev/nvme0n2 00:09:44.655 [job2] 00:09:44.655 filename=/dev/nvme0n3 00:09:44.655 [job3] 00:09:44.655 filename=/dev/nvme0n4 00:09:44.655 Could not set queue depth (nvme0n1) 00:09:44.655 Could not set queue depth (nvme0n2) 00:09:44.655 Could not set queue depth (nvme0n3) 00:09:44.655 Could not set queue depth (nvme0n4) 00:09:44.655 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.655 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.655 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.655 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:44.655 fio-3.35 00:09:44.655 Starting 4 threads 00:09:46.028 00:09:46.028 job0: (groupid=0, jobs=1): err= 0: pid=68755: Fri Jul 12 11:33:49 2024 00:09:46.028 read: 
IOPS=5355, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1002msec) 00:09:46.028 slat (usec): min=5, max=4733, avg=89.87, stdev=390.84 00:09:46.028 clat (usec): min=1468, max=16984, avg=11893.47, stdev=1391.73 00:09:46.028 lat (usec): min=1486, max=17102, avg=11983.35, stdev=1402.82 00:09:46.028 clat percentiles (usec): 00:09:46.028 | 1.00th=[ 5342], 5.00th=[10028], 10.00th=[10552], 20.00th=[11076], 00:09:46.028 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12256], 00:09:46.028 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13042], 95.00th=[13566], 00:09:46.028 | 99.00th=[15139], 99.50th=[15401], 99.90th=[15664], 99.95th=[16319], 00:09:46.028 | 99.99th=[16909] 00:09:46.028 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:46.028 slat (usec): min=10, max=4543, avg=84.07, stdev=460.53 00:09:46.028 clat (usec): min=6414, max=16297, avg=11169.46, stdev=1073.30 00:09:46.028 lat (usec): min=6451, max=16338, avg=11253.53, stdev=1157.67 00:09:46.028 clat percentiles (usec): 00:09:46.028 | 1.00th=[ 7832], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10421], 00:09:46.028 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:09:46.028 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[12649], 00:09:46.028 | 99.00th=[14746], 99.50th=[15270], 99.90th=[15926], 99.95th=[16057], 00:09:46.028 | 99.99th=[16319] 00:09:46.028 bw ( KiB/s): min=21226, max=23872, per=29.11%, avg=22549.00, stdev=1871.00, samples=2 00:09:46.028 iops : min= 5306, max= 5968, avg=5637.00, stdev=468.10, samples=2 00:09:46.028 lat (msec) : 2=0.10%, 4=0.20%, 10=6.90%, 20=92.80% 00:09:46.028 cpu : usr=4.80%, sys=15.98%, ctx=338, majf=0, minf=3 00:09:46.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:46.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.028 issued rwts: total=5366,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.028 job1: (groupid=0, jobs=1): err= 0: pid=68757: Fri Jul 12 11:33:49 2024 00:09:46.028 read: IOPS=4592, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:09:46.028 slat (usec): min=7, max=8261, avg=112.38, stdev=472.83 00:09:46.028 clat (usec): min=912, max=34183, avg=14358.91, stdev=5101.33 00:09:46.028 lat (usec): min=1915, max=34216, avg=14471.29, stdev=5143.63 00:09:46.028 clat percentiles (usec): 00:09:46.028 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11207], 20.00th=[11731], 00:09:46.028 | 30.00th=[11863], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:09:46.028 | 70.00th=[12911], 80.00th=[17695], 90.00th=[23200], 95.00th=[25297], 00:09:46.028 | 99.00th=[30540], 99.50th=[31589], 99.90th=[33424], 99.95th=[33424], 00:09:46.028 | 99.99th=[34341] 00:09:46.028 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:09:46.028 slat (usec): min=10, max=4019, avg=96.93, stdev=400.72 00:09:46.028 clat (usec): min=8827, max=37395, avg=13147.82, stdev=4053.76 00:09:46.028 lat (usec): min=8849, max=37425, avg=13244.75, stdev=4090.98 00:09:46.028 clat percentiles (usec): 00:09:46.028 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[10945], 20.00th=[11338], 00:09:46.028 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11600], 60.00th=[11863], 00:09:46.028 | 70.00th=[12125], 80.00th=[12911], 90.00th=[19006], 95.00th=[23462], 00:09:46.028 | 99.00th=[29230], 99.50th=[30016], 99.90th=[34341], 99.95th=[34341], 00:09:46.028 | 
99.99th=[37487] 00:09:46.028 bw ( KiB/s): min=15184, max=21680, per=23.79%, avg=18432.00, stdev=4593.37, samples=2 00:09:46.028 iops : min= 3796, max= 5420, avg=4608.00, stdev=1148.34, samples=2 00:09:46.028 lat (usec) : 1000=0.01% 00:09:46.028 lat (msec) : 2=0.04%, 4=0.02%, 10=2.62%, 20=83.45%, 50=13.85% 00:09:46.028 cpu : usr=3.90%, sys=13.09%, ctx=611, majf=0, minf=3 00:09:46.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:46.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.028 issued rwts: total=4602,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.028 job2: (groupid=0, jobs=1): err= 0: pid=68758: Fri Jul 12 11:33:49 2024 00:09:46.028 read: IOPS=3917, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1002msec) 00:09:46.028 slat (usec): min=7, max=8334, avg=126.29, stdev=568.60 00:09:46.028 clat (usec): min=1337, max=29474, avg=16169.24, stdev=4025.91 00:09:46.028 lat (usec): min=2461, max=29491, avg=16295.53, stdev=4028.70 00:09:46.028 clat percentiles (usec): 00:09:46.028 | 1.00th=[ 8717], 5.00th=[13698], 10.00th=[13829], 20.00th=[13960], 00:09:46.028 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14222], 60.00th=[14484], 00:09:46.028 | 70.00th=[14615], 80.00th=[20841], 90.00th=[22938], 95.00th=[24511], 00:09:46.028 | 99.00th=[26870], 99.50th=[27395], 99.90th=[28443], 99.95th=[28443], 00:09:46.028 | 99.99th=[29492] 00:09:46.028 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:09:46.028 slat (usec): min=10, max=4981, avg=115.21, stdev=467.70 00:09:46.028 clat (usec): min=10249, max=27726, avg=15380.45, stdev=3772.71 00:09:46.028 lat (usec): min=11025, max=27745, avg=15495.67, stdev=3774.65 00:09:46.028 clat percentiles (usec): 00:09:46.028 | 1.00th=[10945], 5.00th=[13173], 10.00th=[13304], 20.00th=[13435], 00:09:46.028 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:09:46.028 | 70.00th=[13960], 80.00th=[15533], 90.00th=[23200], 95.00th=[23987], 00:09:46.028 | 99.00th=[25822], 99.50th=[26608], 99.90th=[27132], 99.95th=[27657], 00:09:46.028 | 99.99th=[27657] 00:09:46.028 bw ( KiB/s): min=13304, max=19503, per=21.17%, avg=16403.50, stdev=4383.35, samples=2 00:09:46.028 iops : min= 3326, max= 4875, avg=4100.50, stdev=1095.31, samples=2 00:09:46.028 lat (msec) : 2=0.01%, 4=0.01%, 10=0.52%, 20=80.38%, 50=19.07% 00:09:46.028 cpu : usr=2.40%, sys=12.79%, ctx=474, majf=0, minf=6 00:09:46.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:46.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.028 issued rwts: total=3925,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.028 job3: (groupid=0, jobs=1): err= 0: pid=68759: Fri Jul 12 11:33:49 2024 00:09:46.028 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:09:46.028 slat (usec): min=5, max=4945, avg=100.68, stdev=403.84 00:09:46.028 clat (usec): min=9039, max=18711, avg=13427.79, stdev=1251.74 00:09:46.028 lat (usec): min=9073, max=18728, avg=13528.47, stdev=1292.84 00:09:46.028 clat percentiles (usec): 00:09:46.028 | 1.00th=[10028], 5.00th=[11600], 10.00th=[11994], 20.00th=[12256], 00:09:46.028 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 
60.00th=[13698], 00:09:46.028 | 70.00th=[13829], 80.00th=[14091], 90.00th=[15008], 95.00th=[15795], 00:09:46.028 | 99.00th=[16581], 99.50th=[16712], 99.90th=[18482], 99.95th=[18482], 00:09:46.028 | 99.99th=[18744] 00:09:46.028 write: IOPS=5058, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1002msec); 0 zone resets 00:09:46.028 slat (usec): min=10, max=3975, avg=98.08, stdev=459.43 00:09:46.028 clat (usec): min=244, max=17645, avg=12760.42, stdev=1491.45 00:09:46.028 lat (usec): min=3012, max=17693, avg=12858.49, stdev=1550.53 00:09:46.028 clat percentiles (usec): 00:09:46.028 | 1.00th=[ 6718], 5.00th=[10945], 10.00th=[11338], 20.00th=[11994], 00:09:46.028 | 30.00th=[12518], 40.00th=[12649], 50.00th=[13042], 60.00th=[13173], 00:09:46.028 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13829], 95.00th=[14877], 00:09:46.028 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16909], 99.95th=[17171], 00:09:46.028 | 99.99th=[17695] 00:09:46.028 bw ( KiB/s): min=19496, max=20072, per=25.54%, avg=19784.00, stdev=407.29, samples=2 00:09:46.028 iops : min= 4874, max= 5018, avg=4946.00, stdev=101.82, samples=2 00:09:46.028 lat (usec) : 250=0.01% 00:09:46.028 lat (msec) : 4=0.43%, 10=1.23%, 20=98.33% 00:09:46.028 cpu : usr=4.10%, sys=14.29%, ctx=393, majf=0, minf=5 00:09:46.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:46.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.028 issued rwts: total=4608,5069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.028 00:09:46.028 Run status group 0 (all jobs): 00:09:46.028 READ: bw=72.1MiB/s (75.6MB/s), 15.3MiB/s-20.9MiB/s (16.0MB/s-21.9MB/s), io=72.3MiB (75.8MB), run=1002-1002msec 00:09:46.028 WRITE: bw=75.6MiB/s (79.3MB/s), 16.0MiB/s-22.0MiB/s (16.7MB/s-23.0MB/s), io=75.8MiB (79.5MB), run=1002-1002msec 00:09:46.028 00:09:46.028 Disk stats (read/write): 00:09:46.028 nvme0n1: ios=4658/4668, merge=0/0, ticks=26870/21263, in_queue=48133, util=87.66% 00:09:46.028 nvme0n2: ios=4138/4377, merge=0/0, ticks=17418/15427, in_queue=32845, util=89.24% 00:09:46.028 nvme0n3: ios=3611/3636, merge=0/0, ticks=13729/11823, in_queue=25552, util=89.13% 00:09:46.028 nvme0n4: ios=4096/4127, merge=0/0, ticks=17602/14853, in_queue=32455, util=89.59% 00:09:46.028 11:33:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:46.028 11:33:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68772 00:09:46.028 11:33:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:46.028 11:33:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:46.028 [global] 00:09:46.028 thread=1 00:09:46.028 invalidate=1 00:09:46.028 rw=read 00:09:46.028 time_based=1 00:09:46.028 runtime=10 00:09:46.028 ioengine=libaio 00:09:46.028 direct=1 00:09:46.028 bs=4096 00:09:46.028 iodepth=1 00:09:46.028 norandommap=1 00:09:46.028 numjobs=1 00:09:46.028 00:09:46.028 [job0] 00:09:46.028 filename=/dev/nvme0n1 00:09:46.028 [job1] 00:09:46.028 filename=/dev/nvme0n2 00:09:46.028 [job2] 00:09:46.028 filename=/dev/nvme0n3 00:09:46.028 [job3] 00:09:46.028 filename=/dev/nvme0n4 00:09:46.028 Could not set queue depth (nvme0n1) 00:09:46.028 Could not set queue depth (nvme0n2) 00:09:46.028 Could not set queue depth (nvme0n3) 00:09:46.028 Could not set queue depth (nvme0n4) 00:09:46.028 job0: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.028 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.028 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.028 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.028 fio-3.35 00:09:46.028 Starting 4 threads 00:09:49.304 11:33:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:49.304 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=43446272, buflen=4096 00:09:49.304 fio: pid=68815, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:49.304 11:33:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:49.304 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=48259072, buflen=4096 00:09:49.304 fio: pid=68814, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:49.304 11:33:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.304 11:33:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:49.561 fio: pid=68812, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:49.561 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=54116352, buflen=4096 00:09:49.561 11:33:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.561 11:33:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:49.825 fio: pid=68813, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:49.825 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=59670528, buflen=4096 00:09:49.825 00:09:49.825 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68812: Fri Jul 12 11:33:53 2024 00:09:49.825 read: IOPS=3863, BW=15.1MiB/s (15.8MB/s)(51.6MiB/3420msec) 00:09:49.825 slat (usec): min=10, max=11981, avg=15.96, stdev=177.21 00:09:49.825 clat (usec): min=128, max=2273, avg=241.40, stdev=51.68 00:09:49.825 lat (usec): min=140, max=12258, avg=257.36, stdev=184.99 00:09:49.825 clat percentiles (usec): 00:09:49.825 | 1.00th=[ 151], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:09:49.825 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:09:49.825 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 269], 00:09:49.825 | 99.00th=[ 289], 99.50th=[ 375], 99.90th=[ 807], 99.95th=[ 1467], 00:09:49.825 | 99.99th=[ 2278] 00:09:49.825 bw ( KiB/s): min=15008, max=15752, per=28.38%, avg=15480.33, stdev=251.07, samples=6 00:09:49.825 iops : min= 3752, max= 3938, avg=3870.00, stdev=62.78, samples=6 00:09:49.825 lat (usec) : 250=71.51%, 500=28.29%, 750=0.07%, 1000=0.04% 00:09:49.825 lat (msec) : 2=0.06%, 4=0.02% 00:09:49.825 cpu : usr=0.97%, sys=4.59%, ctx=13217, majf=0, minf=1 00:09:49.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.825 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:09:49.825 issued rwts: total=13213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.825 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68813: Fri Jul 12 11:33:53 2024 00:09:49.825 read: IOPS=3960, BW=15.5MiB/s (16.2MB/s)(56.9MiB/3679msec) 00:09:49.825 slat (usec): min=11, max=17689, avg=18.75, stdev=242.92 00:09:49.825 clat (usec): min=125, max=3637, avg=232.26, stdev=64.56 00:09:49.825 lat (usec): min=138, max=17990, avg=251.01, stdev=252.15 00:09:49.825 clat percentiles (usec): 00:09:49.825 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 159], 20.00th=[ 221], 00:09:49.825 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:09:49.825 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:09:49.825 | 99.00th=[ 310], 99.50th=[ 379], 99.90th=[ 914], 99.95th=[ 1467], 00:09:49.825 | 99.99th=[ 2343] 00:09:49.825 bw ( KiB/s): min=14936, max=17309, per=28.80%, avg=15708.43, stdev=752.22, samples=7 00:09:49.825 iops : min= 3734, max= 4327, avg=3927.00, stdev=188.00, samples=7 00:09:49.825 lat (usec) : 250=76.42%, 500=23.34%, 750=0.11%, 1000=0.03% 00:09:49.825 lat (msec) : 2=0.07%, 4=0.03% 00:09:49.825 cpu : usr=1.33%, sys=4.76%, ctx=14577, majf=0, minf=1 00:09:49.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.825 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.825 issued rwts: total=14569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.825 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68814: Fri Jul 12 11:33:53 2024 00:09:49.825 read: IOPS=3711, BW=14.5MiB/s (15.2MB/s)(46.0MiB/3175msec) 00:09:49.825 slat (usec): min=10, max=15353, avg=15.32, stdev=163.33 00:09:49.825 clat (usec): min=133, max=2420, avg=252.56, stdev=44.35 00:09:49.825 lat (usec): min=145, max=15523, avg=267.87, stdev=168.38 00:09:49.825 clat percentiles (usec): 00:09:49.825 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 190], 20.00th=[ 245], 00:09:49.825 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:09:49.825 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:09:49.825 | 99.00th=[ 326], 99.50th=[ 367], 99.90th=[ 469], 99.95th=[ 701], 00:09:49.825 | 99.99th=[ 1057] 00:09:49.825 bw ( KiB/s): min=14384, max=14880, per=26.65%, avg=14538.67, stdev=186.82, samples=6 00:09:49.825 iops : min= 3596, max= 3720, avg=3634.50, stdev=46.79, samples=6 00:09:49.825 lat (usec) : 250=32.78%, 500=67.12%, 750=0.04%, 1000=0.03% 00:09:49.825 lat (msec) : 2=0.01%, 4=0.01% 00:09:49.825 cpu : usr=1.07%, sys=4.38%, ctx=11787, majf=0, minf=1 00:09:49.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.825 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.825 issued rwts: total=11783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.825 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68815: Fri Jul 12 11:33:53 2024 00:09:49.825 read: IOPS=3600, BW=14.1MiB/s (14.7MB/s)(41.4MiB/2946msec) 00:09:49.825 slat (usec): min=11, 
max=248, avg=15.31, stdev= 4.89 00:09:49.825 clat (usec): min=143, max=2173, avg=260.75, stdev=36.49 00:09:49.825 lat (usec): min=158, max=2187, avg=276.06, stdev=36.50 00:09:49.825 clat percentiles (usec): 00:09:49.825 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 245], 00:09:49.825 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:09:49.825 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:09:49.825 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 529], 99.95th=[ 799], 00:09:49.825 | 99.99th=[ 2114] 00:09:49.825 bw ( KiB/s): min=14392, max=14573, per=26.50%, avg=14453.80, stdev=70.60, samples=5 00:09:49.825 iops : min= 3598, max= 3643, avg=3613.40, stdev=17.54, samples=5 00:09:49.825 lat (usec) : 250=31.00%, 500=68.88%, 750=0.05%, 1000=0.03% 00:09:49.825 lat (msec) : 2=0.01%, 4=0.02% 00:09:49.825 cpu : usr=1.32%, sys=4.48%, ctx=10612, majf=0, minf=1 00:09:49.825 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.825 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.825 issued rwts: total=10608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.825 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.825 00:09:49.825 Run status group 0 (all jobs): 00:09:49.825 READ: bw=53.3MiB/s (55.9MB/s), 14.1MiB/s-15.5MiB/s (14.7MB/s-16.2MB/s), io=196MiB (205MB), run=2946-3679msec 00:09:49.825 00:09:49.825 Disk stats (read/write): 00:09:49.825 nvme0n1: ios=13050/0, merge=0/0, ticks=3164/0, in_queue=3164, util=95.54% 00:09:49.825 nvme0n2: ios=14272/0, merge=0/0, ticks=3362/0, in_queue=3362, util=95.24% 00:09:49.825 nvme0n3: ios=11526/0, merge=0/0, ticks=2973/0, in_queue=2973, util=96.19% 00:09:49.825 nvme0n4: ios=10363/0, merge=0/0, ticks=2743/0, in_queue=2743, util=96.77% 00:09:49.825 11:33:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:49.825 11:33:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:50.082 11:33:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.082 11:33:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:50.340 11:33:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.340 11:33:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:50.598 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.598 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:50.855 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:50.855 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68772 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@70 -- # fio_status=4 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.112 nvmf hotplug test: fio failed as expected 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:51.112 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:51.369 rmmod nvme_tcp 00:09:51.369 rmmod nvme_fabrics 00:09:51.369 rmmod nvme_keyring 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68385 ']' 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68385 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68385 ']' 00:09:51.369 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68385 00:09:51.629 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:51.629 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.629 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68385 00:09:51.629 11:33:54 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:51.629 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:51.629 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68385' 00:09:51.629 killing process with pid 68385 00:09:51.629 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68385 00:09:51.629 11:33:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68385 00:09:51.629 11:33:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.629 11:33:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.629 11:33:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.629 11:33:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.629 11:33:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.629 11:33:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.629 11:33:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.629 11:33:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.888 11:33:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:51.888 ************************************ 00:09:51.888 END TEST nvmf_fio_target 00:09:51.888 ************************************ 00:09:51.888 00:09:51.888 real 0m19.500s 00:09:51.888 user 1m14.655s 00:09:51.888 sys 0m9.602s 00:09:51.888 11:33:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.888 11:33:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.888 11:33:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:51.888 11:33:55 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:51.888 11:33:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:51.888 11:33:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.888 11:33:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:51.888 ************************************ 00:09:51.888 START TEST nvmf_bdevio 00:09:51.888 ************************************ 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:51.888 * Looking for test storage... 
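[editor's note] The hotplug check traced above is easier to follow outside the xtrace noise. A minimal standalone sketch of the same sequence, using only the paths, bdev names and fio-wrapper options that appear in this run (RPC socket left at its default); this is a condensation of the trace, not the verbatim fio.sh code:

  # Start the read workload against the connected namespaces in the background.
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # Hot-remove the backing bdevs while fio is still reading; the initiator then
  # sees Remote I/O errors, which is the expected outcome of the hotplug test.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_raid_delete concat0
  "$rpc" bdev_raid_delete raid0
  for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      "$rpc" bdev_malloc_delete "$malloc_bdev"
  done

  # fio should now exit non-zero.
  if wait "$fio_pid"; then
      echo "unexpected: fio succeeded"
  else
      echo "nvmf hotplug test: fio failed as expected"
  fi

  # Detach the initiator and remove the subsystem.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1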
00:09:51.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.888 11:33:55 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:51.888 Cannot find device "nvmf_tgt_br" 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:51.888 Cannot find device "nvmf_tgt_br2" 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:51.888 Cannot find device "nvmf_tgt_br" 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:51.888 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:52.146 Cannot find device "nvmf_tgt_br2" 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:52.146 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:52.147 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:52.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:52.405 00:09:52.405 --- 10.0.0.2 ping statistics --- 00:09:52.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.405 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:52.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:52.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:09:52.405 00:09:52.405 --- 10.0.0.3 ping statistics --- 00:09:52.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.405 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:52.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:09:52.405 00:09:52.405 --- 10.0.0.1 ping statistics --- 00:09:52.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.405 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69078 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69078 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69078 ']' 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.405 11:33:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.405 [2024-07-12 11:33:55.745454] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:09:52.405 [2024-07-12 11:33:55.745551] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.664 [2024-07-12 11:33:55.881740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.664 [2024-07-12 11:33:55.998092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.664 [2024-07-12 11:33:55.998157] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:52.664 [2024-07-12 11:33:55.998191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.664 [2024-07-12 11:33:55.998203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.664 [2024-07-12 11:33:55.998213] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.664 [2024-07-12 11:33:55.998359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:52.664 [2024-07-12 11:33:55.998515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:52.664 [2024-07-12 11:33:55.998620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:52.664 [2024-07-12 11:33:55.998628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:52.664 [2024-07-12 11:33:56.051472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.293 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.293 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:53.293 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.293 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.293 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 [2024-07-12 11:33:56.765299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 Malloc0 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.550 [2024-07-12 11:33:56.840102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.550 11:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.551 { 00:09:53.551 "params": { 00:09:53.551 "name": "Nvme$subsystem", 00:09:53.551 "trtype": "$TEST_TRANSPORT", 00:09:53.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.551 "adrfam": "ipv4", 00:09:53.551 "trsvcid": "$NVMF_PORT", 00:09:53.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.551 "hdgst": ${hdgst:-false}, 00:09:53.551 "ddgst": ${ddgst:-false} 00:09:53.551 }, 00:09:53.551 "method": "bdev_nvme_attach_controller" 00:09:53.551 } 00:09:53.551 EOF 00:09:53.551 )") 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:53.551 11:33:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.551 "params": { 00:09:53.551 "name": "Nvme1", 00:09:53.551 "trtype": "tcp", 00:09:53.551 "traddr": "10.0.0.2", 00:09:53.551 "adrfam": "ipv4", 00:09:53.551 "trsvcid": "4420", 00:09:53.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.551 "hdgst": false, 00:09:53.551 "ddgst": false 00:09:53.551 }, 00:09:53.551 "method": "bdev_nvme_attach_controller" 00:09:53.551 }' 00:09:53.551 [2024-07-12 11:33:56.890917] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:09:53.551 [2024-07-12 11:33:56.890994] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69114 ] 00:09:53.809 [2024-07-12 11:33:57.049785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.809 [2024-07-12 11:33:57.178445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.809 [2024-07-12 11:33:57.178605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.809 [2024-07-12 11:33:57.178810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.809 [2024-07-12 11:33:57.244918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.067 I/O targets: 00:09:54.067 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:54.067 00:09:54.067 00:09:54.067 CUnit - A unit testing framework for C - Version 2.1-3 00:09:54.067 http://cunit.sourceforge.net/ 00:09:54.067 00:09:54.067 00:09:54.067 Suite: bdevio tests on: Nvme1n1 00:09:54.067 Test: blockdev write read block ...passed 00:09:54.067 Test: blockdev write zeroes read block ...passed 00:09:54.067 Test: blockdev write zeroes read no split ...passed 00:09:54.067 Test: blockdev write zeroes read split ...passed 00:09:54.067 Test: blockdev write zeroes read split partial ...passed 00:09:54.067 Test: blockdev reset ...[2024-07-12 11:33:57.396643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:54.067 [2024-07-12 11:33:57.396886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe667c0 (9): Bad file descriptor 00:09:54.067 [2024-07-12 11:33:57.410531] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:54.067 passed 00:09:54.067 Test: blockdev write read 8 blocks ...passed 00:09:54.067 Test: blockdev write read size > 128k ...passed 00:09:54.067 Test: blockdev write read invalid size ...passed 00:09:54.067 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:54.067 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:54.067 Test: blockdev write read max offset ...passed 00:09:54.067 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:54.067 Test: blockdev writev readv 8 blocks ...passed 00:09:54.067 Test: blockdev writev readv 30 x 1block ...passed 00:09:54.067 Test: blockdev writev readv block ...passed 00:09:54.067 Test: blockdev writev readv size > 128k ...passed 00:09:54.067 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:54.067 Test: blockdev comparev and writev ...[2024-07-12 11:33:57.420882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.067 [2024-07-12 11:33:57.420932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.420958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.067 [2024-07-12 11:33:57.420971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.421844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.067 [2024-07-12 11:33:57.421883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.421906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.067 [2024-07-12 11:33:57.421919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.422450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.067 [2024-07-12 11:33:57.422486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.422509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.067 [2024-07-12 11:33:57.422522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.423090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.067 [2024-07-12 11:33:57.423127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.423149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:54.067 [2024-07-12 11:33:57.423161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:54.067 passed 00:09:54.067 Test: blockdev nvme passthru rw ...passed 00:09:54.067 Test: blockdev nvme passthru vendor specific ...[2024-07-12 11:33:57.424664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:54.067 [2024-07-12 11:33:57.424719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.425104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:54.067 [2024-07-12 11:33:57.425140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.425516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:54.067 [2024-07-12 11:33:57.425551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:54.067 [2024-07-12 11:33:57.425867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:54.067 [2024-07-12 11:33:57.425901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:54.067 passed 00:09:54.067 Test: blockdev nvme admin passthru ...passed 00:09:54.067 Test: blockdev copy ...passed 00:09:54.067 00:09:54.067 Run Summary: Type Total Ran Passed Failed Inactive 00:09:54.067 suites 1 1 n/a 0 0 00:09:54.067 tests 23 23 23 0 0 00:09:54.067 asserts 152 152 152 0 n/a 00:09:54.067 00:09:54.067 Elapsed time = 0.144 seconds 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.326 rmmod nvme_tcp 00:09:54.326 rmmod nvme_fabrics 00:09:54.326 rmmod nvme_keyring 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69078 ']' 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69078 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69078 ']' 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69078 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69078 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69078' 00:09:54.326 killing process with pid 69078 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69078 00:09:54.326 11:33:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69078 00:09:54.585 11:33:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.585 11:33:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.585 11:33:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.585 11:33:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.585 11:33:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.585 11:33:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.585 11:33:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.585 11:33:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.844 11:33:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:54.844 00:09:54.844 real 0m2.889s 00:09:54.844 user 0m9.395s 00:09:54.844 sys 0m0.756s 00:09:54.844 11:33:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.844 ************************************ 00:09:54.844 END TEST nvmf_bdevio 00:09:54.844 ************************************ 00:09:54.844 11:33:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.844 11:33:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:54.844 11:33:58 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:54.844 11:33:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:54.844 11:33:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.844 11:33:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:54.844 ************************************ 00:09:54.844 START TEST nvmf_auth_target 00:09:54.844 ************************************ 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:54.844 * Looking for test storage... 
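[editor's note] The bdevio run that just finished configured its target entirely through rpc_cmd, a thin wrapper over rpc.py in these scripts. Pulled out of the xtrace, the same setup reads as the sketch below; the sizes and names are the ones from this run (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, listener on 10.0.0.2:4420):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport, options exactly as traced (-t tcp -o -u 8192).
  "$rpc" nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB malloc bdev with 512-byte blocks, attached as a namespace of cnode1
  # and exposed on the veth address the initiator connects to.
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420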
00:09:54.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:54.844 Cannot find device "nvmf_tgt_br" 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.844 Cannot find device "nvmf_tgt_br2" 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:54.844 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:54.844 Cannot find device "nvmf_tgt_br" 00:09:54.844 
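[editor's note] The nvmf_veth_init sequence the auth test is entering here (and that already ran once for the bdevio test above) builds a small bridged veth topology with the target side inside the nvmf_tgt_ns_spdk namespace. Condensed from this trace, with the second target interface (nvmf_tgt_if2 / 10.0.0.3) omitted since it is created the same way:

  ip netns add nvmf_tgt_ns_spdk

  # Two veth pairs: the initiator end stays in the root namespace,
  # the target end is moved into the namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # 10.0.0.1 is the initiator, 10.0.0.2 the address the target listens on.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Join the pair ends that remain in the root namespace to one bridge and
  # allow NVMe/TCP traffic (port 4420) in from the initiator interface.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT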
11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:54.845 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:54.845 Cannot find device "nvmf_tgt_br2" 00:09:54.845 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:54.845 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:55.102 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:55.102 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.102 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:55.103 11:33:58 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:55.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:09:55.103 00:09:55.103 --- 10.0.0.2 ping statistics --- 00:09:55.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.103 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:55.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:55.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:55.103 00:09:55.103 --- 10.0.0.3 ping statistics --- 00:09:55.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.103 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:55.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:55.103 00:09:55.103 --- 10.0.0.1 ping statistics --- 00:09:55.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.103 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:55.103 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69292 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69292 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69292 ']' 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.361 11:33:58 
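The block of ip/iptables commands running through the trace around this point is nvmf_veth_init building the test topology: one veth pair for the initiator and two for the target, with the target ends moved into the nvmf_tgt_ns_spdk namespace and the host-side peers bridged together. Condensed into plain commands (names and addresses are the ones shown in the log; the error handling and "# true" fallbacks of nvmf/common.sh are omitted), the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-facing pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the target port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # the sanity pings whose statistics appear above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

After the pings succeed, NVMF_APP is prefixed with the namespace command, which is why the nvmf_tgt that starts next runs under "ip netns exec nvmf_tgt_ns_spdk" and listens on 10.0.0.2/10.0.0.3.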
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:55.361 11:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69326 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5b69ae9c7d75c6bb9cf1b8efd012196d390a39b3c7480849 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Lwv 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5b69ae9c7d75c6bb9cf1b8efd012196d390a39b3c7480849 0 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5b69ae9c7d75c6bb9cf1b8efd012196d390a39b3c7480849 0 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5b69ae9c7d75c6bb9cf1b8efd012196d390a39b3c7480849 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:09:56.296 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Lwv 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Lwv 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Lwv 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d9905c778cbc5942e6ca7c5e8782d1b9cbdf91e07c8f6d99a65c04f8db470071 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4ZG 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d9905c778cbc5942e6ca7c5e8782d1b9cbdf91e07c8f6d99a65c04f8db470071 3 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d9905c778cbc5942e6ca7c5e8782d1b9cbdf91e07c8f6d99a65c04f8db470071 3 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d9905c778cbc5942e6ca7c5e8782d1b9cbdf91e07c8f6d99a65c04f8db470071 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4ZG 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4ZG 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.4ZG 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=38a9ffb59f6b22d4eacfb2a739d017d1 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SKR 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 38a9ffb59f6b22d4eacfb2a739d017d1 1 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 38a9ffb59f6b22d4eacfb2a739d017d1 1 
00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=38a9ffb59f6b22d4eacfb2a739d017d1 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SKR 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SKR 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.SKR 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=59963bd9bf4598502cb56e4c18d051a0187ea72a0b487595 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OV5 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 59963bd9bf4598502cb56e4c18d051a0187ea72a0b487595 2 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 59963bd9bf4598502cb56e4c18d051a0187ea72a0b487595 2 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=59963bd9bf4598502cb56e4c18d051a0187ea72a0b487595 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:56.555 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OV5 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OV5 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.OV5 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:56.556 
11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=49d1556f35cd272889e14464f9c0efac226cef20cf44286e 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JDX 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 49d1556f35cd272889e14464f9c0efac226cef20cf44286e 2 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 49d1556f35cd272889e14464f9c0efac226cef20cf44286e 2 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=49d1556f35cd272889e14464f9c0efac226cef20cf44286e 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:56.556 11:33:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JDX 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JDX 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.JDX 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3eefdc257150dfbe410ed47b304ed52b 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LG0 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3eefdc257150dfbe410ed47b304ed52b 1 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3eefdc257150dfbe410ed47b304ed52b 1 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3eefdc257150dfbe410ed47b304ed52b 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LG0 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LG0 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.LG0 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7007be6028c7ea1a5753bc307890d20071dca7e9af9d05962d1e8e3aaea21823 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OWN 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7007be6028c7ea1a5753bc307890d20071dca7e9af9d05962d1e8e3aaea21823 3 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7007be6028c7ea1a5753bc307890d20071dca7e9af9d05962d1e8e3aaea21823 3 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7007be6028c7ea1a5753bc307890d20071dca7e9af9d05962d1e8e3aaea21823 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OWN 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OWN 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.OWN 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:09:56.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69292 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69292 ']' 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.815 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
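All six gen_dhchap_key calls above follow the same pattern: pull the requested number of hex characters from /dev/urandom with xxd, write the formatted secret to a mktemp file, chmod it to 0600 and echo the path back so it can be recorded in keys[]/ckeys[]. The formatting itself happens in the unexpanded "python -" step, so the sketch below fills that part in with an assumption: the secret appears to be "DHHC-1:<digest id>:" followed by base64 of the ASCII hex key plus a 4-byte little-endian CRC-32 tail, which matches the shape of the DHHC-1:xx:...: strings passed to nvme connect later in this log (the exact checksum input has not been verified from the trace).

digest=sha256 len=32                              # one of the (digest, length) pairs requested above
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # "len" hex characters of randomness
file=$(mktemp -t "spdk.key-$digest.XXX")
python3 - "$digest" "$key" > "$file" <<'PY'
import base64, struct, sys, zlib
digests = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
digest, key = sys.argv[1], sys.argv[2].encode()
blob = key + struct.pack("<I", zlib.crc32(key))   # ASCII hex key + assumed CRC-32 tail
print("DHHC-1:{:02x}:{}:".format(digests[digest], base64.b64encode(blob).decode()))
PY
chmod 0600 "$file"
echo "$file"                                      # the path stored in keys[i] / ckeys[i]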
00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69326 /var/tmp/host.sock 00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69326 ']' 00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.075 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Lwv 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Lwv 00:09:57.334 11:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Lwv 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.4ZG ]] 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4ZG 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4ZG 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4ZG 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.SKR 00:09:57.901 11:34:01 
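The two "Waiting for process to start up and listen on UNIX domain socket ..." messages (one for /var/tmp/spdk.sock, one for /var/tmp/host.sock) come from waitforlisten in common/autotest_common.sh. Its body is not expanded in the trace; a rough sketch of the shape it takes here, assuming a simple bounded poll on the daemon PID and its RPC socket (the max_retries=100 local is visible above):

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # the daemon died before it ever listened
        [[ -S $sock ]] && return 0               # socket exists; rpc.py calls can go through now
        sleep 0.5
    done
    return 1
}
# the two uses seen in the trace:
# waitforlisten_sketch 69292                       # nvmf_tgt on the default /var/tmp/spdk.sock
# waitforlisten_sketch 69326 /var/tmp/host.sock    # host-side spdk_tgt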
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.901 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.159 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.159 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.SKR 00:09:58.159 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.SKR 00:09:58.159 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.OV5 ]] 00:09:58.159 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OV5 00:09:58.159 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.159 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.159 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.160 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OV5 00:09:58.160 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OV5 00:09:58.726 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:58.726 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JDX 00:09:58.726 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.726 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.726 11:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.726 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.JDX 00:09:58.726 11:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.JDX 00:09:58.726 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.LG0 ]] 00:09:58.726 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LG0 00:09:58.726 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.726 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.726 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.726 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LG0 00:09:58.726 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LG0 00:09:58.985 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:58.985 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.OWN 00:09:58.985 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.985 11:34:02 nvmf_tcp.nvmf_auth_target -- 
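The keyring_file_add_key calls running through this part of the trace all follow one pattern: each generated key file is registered under the same name (key0..key3, plus ckey0..ckey2 where a controller key exists) on both sides, via rpc_cmd against the nvmf target's /var/tmp/spdk.sock and via hostrpc against the host-side spdk_tgt on /var/tmp/host.sock. Written out once (rpc points at scripts/rpc.py, keys[]/ckeys[] hold the paths generated above; the rpc_cmd/hostrpc wrappers themselves are not expanded in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"                        # target side (rpc_cmd)
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side (hostrpc)
    if [[ -n ${ckeys[$i]} ]]; then
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done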
common/autotest_common.sh@10 -- # set +x 00:09:58.985 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.985 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.OWN 00:09:58.985 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.OWN 00:09:59.244 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:09:59.244 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:09:59.244 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:59.244 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:59.244 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.244 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.502 11:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.503 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:59.503 11:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:00.068 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:00.068 { 00:10:00.068 "cntlid": 1, 00:10:00.068 "qid": 0, 00:10:00.068 "state": "enabled", 00:10:00.068 "thread": "nvmf_tgt_poll_group_000", 00:10:00.068 "listen_address": { 00:10:00.068 "trtype": "TCP", 00:10:00.068 "adrfam": "IPv4", 00:10:00.068 "traddr": "10.0.0.2", 00:10:00.068 "trsvcid": "4420" 00:10:00.068 }, 00:10:00.068 "peer_address": { 00:10:00.068 "trtype": "TCP", 00:10:00.068 "adrfam": "IPv4", 00:10:00.068 "traddr": "10.0.0.1", 00:10:00.068 "trsvcid": "35070" 00:10:00.068 }, 00:10:00.068 "auth": { 00:10:00.068 "state": "completed", 00:10:00.068 "digest": "sha256", 00:10:00.068 "dhgroup": "null" 00:10:00.068 } 00:10:00.068 } 00:10:00.068 ]' 00:10:00.068 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:00.326 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.326 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:00.326 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:00.326 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:00.326 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.326 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.326 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.586 11:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
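The @91-@96 markers that keep recurring from here on are the nested digest/dhgroup/key loop of target/auth.sh, and connect_authenticate (@34-@40) is the body executed for every combination. One iteration, spelled out with the values of the pass that just ran (rpc as in the previous sketch; the --dhchap-ctrlr-key argument is simply omitted for key3, which has no ckey):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0
digest=sha256 dhgroup=null keyid=0
# restrict the host NVMe driver to exactly one digest/dhgroup pair for this pass
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# allow the host NQN on the subsystem, bound to this key pair
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# attach a controller from the host application using the same keys
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"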
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.859 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:05.859 11:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:05.859 { 00:10:05.859 "cntlid": 3, 00:10:05.859 "qid": 0, 00:10:05.859 "state": "enabled", 00:10:05.859 "thread": "nvmf_tgt_poll_group_000", 00:10:05.859 "listen_address": { 00:10:05.859 "trtype": "TCP", 00:10:05.859 "adrfam": "IPv4", 00:10:05.859 "traddr": "10.0.0.2", 00:10:05.859 "trsvcid": "4420" 00:10:05.859 }, 00:10:05.859 "peer_address": { 00:10:05.859 "trtype": "TCP", 00:10:05.859 "adrfam": "IPv4", 00:10:05.859 "traddr": "10.0.0.1", 00:10:05.859 "trsvcid": "45090" 00:10:05.859 }, 00:10:05.859 "auth": { 00:10:05.859 "state": "completed", 00:10:05.859 "digest": "sha256", 00:10:05.859 "dhgroup": "null" 00:10:05.859 } 
00:10:05.859 } 00:10:05.859 ]' 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:05.859 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:06.117 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:06.117 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:06.117 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.117 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.117 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.375 11:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:10:06.940 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.941 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:06.941 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.941 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.941 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.941 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:06.941 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:06.941 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
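After each attach, the trace reads state back from both sides before tearing the connection down: bdev_nvme_get_controllers on the host must report nvme0, and nvmf_subsystem_get_qpairs on the target must show a qpair whose auth block matches the digest and dhgroup of the current pass, in the "completed" state. The jq checks, collected in one place (rpc, subnqn, digest and dhgroup as in the previous sketch):

[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # drop the bdev controller before the nvme-cli leg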
set +x 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.507 11:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.765 00:10:07.765 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:07.765 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.765 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:08.023 { 00:10:08.023 "cntlid": 5, 00:10:08.023 "qid": 0, 00:10:08.023 "state": "enabled", 00:10:08.023 "thread": "nvmf_tgt_poll_group_000", 00:10:08.023 "listen_address": { 00:10:08.023 "trtype": "TCP", 00:10:08.023 "adrfam": "IPv4", 00:10:08.023 "traddr": "10.0.0.2", 00:10:08.023 "trsvcid": "4420" 00:10:08.023 }, 00:10:08.023 "peer_address": { 00:10:08.023 "trtype": "TCP", 00:10:08.023 "adrfam": "IPv4", 00:10:08.023 "traddr": "10.0.0.1", 00:10:08.023 "trsvcid": "45116" 00:10:08.023 }, 00:10:08.023 "auth": { 00:10:08.023 "state": "completed", 00:10:08.023 "digest": "sha256", 00:10:08.023 "dhgroup": "null" 00:10:08.023 } 00:10:08.023 } 00:10:08.023 ]' 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.023 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.282 11:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 
7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:09.218 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:09.793 00:10:09.793 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:09.793 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:09.793 11:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:10.051 { 00:10:10.051 "cntlid": 7, 00:10:10.051 "qid": 0, 00:10:10.051 "state": "enabled", 00:10:10.051 "thread": "nvmf_tgt_poll_group_000", 00:10:10.051 "listen_address": { 00:10:10.051 "trtype": "TCP", 00:10:10.051 "adrfam": "IPv4", 00:10:10.051 "traddr": "10.0.0.2", 00:10:10.051 "trsvcid": "4420" 00:10:10.051 }, 00:10:10.051 "peer_address": { 00:10:10.051 "trtype": "TCP", 00:10:10.051 "adrfam": "IPv4", 00:10:10.051 "traddr": "10.0.0.1", 00:10:10.051 "trsvcid": "45144" 00:10:10.051 }, 00:10:10.051 "auth": { 00:10:10.051 "state": "completed", 00:10:10.051 "digest": "sha256", 00:10:10.051 "dhgroup": "null" 00:10:10.051 } 00:10:10.051 } 00:10:10.051 ]' 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.051 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.309 11:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:10:11.243 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.243 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:11.243 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.243 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.243 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.243 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.243 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:11.243 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:11.243 11:34:14 
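Each pass ends with a kernel-initiator leg, visible as the nvme connect / nvme disconnect pairs in the trace: nvme-cli dials the same subsystem with DHHC-1 secrets passed literally on the command line (they appear to be the contents of the key files generated earlier, though the trace only shows the expanded strings), and the host entry is then removed so the next digest/dhgroup/key combination starts clean. In outline (same variables as in the sketches above; --dhchap-ctrl-secret is dropped when the key has no ctrlr counterpart, as with key3 just above):

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "${hostnqn##*:}" \
    --dhchap-secret "$(< "${keys[$keyid]}")" \
    --dhchap-ctrl-secret "$(< "${ckeys[$keyid]}")"
nvme disconnect -n "$subnqn"                           # logs "disconnected 1 controller(s)" on success
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"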
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.502 11:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.760 00:10:11.760 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:11.760 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:11.760 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:12.018 { 00:10:12.018 "cntlid": 9, 00:10:12.018 "qid": 0, 00:10:12.018 "state": "enabled", 00:10:12.018 "thread": "nvmf_tgt_poll_group_000", 00:10:12.018 "listen_address": { 00:10:12.018 "trtype": "TCP", 00:10:12.018 "adrfam": "IPv4", 00:10:12.018 "traddr": "10.0.0.2", 00:10:12.018 "trsvcid": "4420" 00:10:12.018 }, 00:10:12.018 "peer_address": { 00:10:12.018 "trtype": "TCP", 00:10:12.018 "adrfam": "IPv4", 00:10:12.018 "traddr": "10.0.0.1", 00:10:12.018 "trsvcid": "45168" 00:10:12.018 }, 00:10:12.018 "auth": { 00:10:12.018 "state": "completed", 00:10:12.018 
"digest": "sha256", 00:10:12.018 "dhgroup": "ffdhe2048" 00:10:12.018 } 00:10:12.018 } 00:10:12.018 ]' 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:12.018 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:12.277 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:12.277 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.277 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.277 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.535 11:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:10:13.101 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.101 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:13.101 11:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.101 11:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.101 11:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.101 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:13.101 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:13.101 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.359 11:34:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.359 11:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.616 00:10:13.616 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:13.616 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.874 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:14.132 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.132 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.132 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.132 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.132 11:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:14.133 { 00:10:14.133 "cntlid": 11, 00:10:14.133 "qid": 0, 00:10:14.133 "state": "enabled", 00:10:14.133 "thread": "nvmf_tgt_poll_group_000", 00:10:14.133 "listen_address": { 00:10:14.133 "trtype": "TCP", 00:10:14.133 "adrfam": "IPv4", 00:10:14.133 "traddr": "10.0.0.2", 00:10:14.133 "trsvcid": "4420" 00:10:14.133 }, 00:10:14.133 "peer_address": { 00:10:14.133 "trtype": "TCP", 00:10:14.133 "adrfam": "IPv4", 00:10:14.133 "traddr": "10.0.0.1", 00:10:14.133 "trsvcid": "45184" 00:10:14.133 }, 00:10:14.133 "auth": { 00:10:14.133 "state": "completed", 00:10:14.133 "digest": "sha256", 00:10:14.133 "dhgroup": "ffdhe2048" 00:10:14.133 } 00:10:14.133 } 00:10:14.133 ]' 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.133 11:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.390 11:34:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:10:15.319 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.319 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:15.319 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.319 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.319 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.319 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:15.319 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:15.320 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.578 11:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.838 00:10:15.838 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:15.838 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.838 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:16.096 { 00:10:16.096 "cntlid": 13, 00:10:16.096 "qid": 0, 00:10:16.096 "state": "enabled", 00:10:16.096 "thread": "nvmf_tgt_poll_group_000", 00:10:16.096 "listen_address": { 00:10:16.096 "trtype": "TCP", 00:10:16.096 "adrfam": "IPv4", 00:10:16.096 "traddr": "10.0.0.2", 00:10:16.096 "trsvcid": "4420" 00:10:16.096 }, 00:10:16.096 "peer_address": { 00:10:16.096 "trtype": "TCP", 00:10:16.096 "adrfam": "IPv4", 00:10:16.096 "traddr": "10.0.0.1", 00:10:16.096 "trsvcid": "35892" 00:10:16.096 }, 00:10:16.096 "auth": { 00:10:16.096 "state": "completed", 00:10:16.096 "digest": "sha256", 00:10:16.096 "dhgroup": "ffdhe2048" 00:10:16.096 } 00:10:16.096 } 00:10:16.096 ]' 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.096 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:16.354 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:16.354 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:16.354 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.354 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.354 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.612 11:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:10:17.179 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.179 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:17.179 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.179 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.179 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.179 11:34:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:17.179 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.179 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:17.437 11:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:18.003 00:10:18.003 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:18.003 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.003 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.003 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.003 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.003 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.003 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.003 11:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:18.262 { 00:10:18.262 "cntlid": 15, 00:10:18.262 "qid": 0, 00:10:18.262 "state": "enabled", 00:10:18.262 "thread": "nvmf_tgt_poll_group_000", 00:10:18.262 "listen_address": { 00:10:18.262 "trtype": "TCP", 00:10:18.262 "adrfam": "IPv4", 00:10:18.262 "traddr": "10.0.0.2", 00:10:18.262 "trsvcid": "4420" 00:10:18.262 }, 00:10:18.262 "peer_address": { 00:10:18.262 "trtype": "TCP", 
00:10:18.262 "adrfam": "IPv4", 00:10:18.262 "traddr": "10.0.0.1", 00:10:18.262 "trsvcid": "35928" 00:10:18.262 }, 00:10:18.262 "auth": { 00:10:18.262 "state": "completed", 00:10:18.262 "digest": "sha256", 00:10:18.262 "dhgroup": "ffdhe2048" 00:10:18.262 } 00:10:18.262 } 00:10:18.262 ]' 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.262 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.520 11:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.478 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.479 11:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.077 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:20.077 { 00:10:20.077 "cntlid": 17, 00:10:20.077 "qid": 0, 00:10:20.077 "state": "enabled", 00:10:20.077 "thread": "nvmf_tgt_poll_group_000", 00:10:20.077 "listen_address": { 00:10:20.077 "trtype": "TCP", 00:10:20.077 "adrfam": "IPv4", 00:10:20.077 "traddr": "10.0.0.2", 00:10:20.077 "trsvcid": "4420" 00:10:20.077 }, 00:10:20.077 "peer_address": { 00:10:20.077 "trtype": "TCP", 00:10:20.077 "adrfam": "IPv4", 00:10:20.077 "traddr": "10.0.0.1", 00:10:20.077 "trsvcid": "35946" 00:10:20.077 }, 00:10:20.077 "auth": { 00:10:20.077 "state": "completed", 00:10:20.077 "digest": "sha256", 00:10:20.077 "dhgroup": "ffdhe3072" 00:10:20.077 } 00:10:20.077 } 00:10:20.077 ]' 00:10:20.077 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:20.334 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.334 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:20.334 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:20.334 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:20.334 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.334 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.334 11:34:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.591 11:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:10:21.158 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.158 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:21.158 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.158 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.158 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.158 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:21.158 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:21.158 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.416 11:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.982 00:10:21.982 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:21.982 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.982 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.240 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.240 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.240 11:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.240 11:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.240 11:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.240 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.240 { 00:10:22.240 "cntlid": 19, 00:10:22.240 "qid": 0, 00:10:22.240 "state": "enabled", 00:10:22.240 "thread": "nvmf_tgt_poll_group_000", 00:10:22.240 "listen_address": { 00:10:22.240 "trtype": "TCP", 00:10:22.240 "adrfam": "IPv4", 00:10:22.240 "traddr": "10.0.0.2", 00:10:22.240 "trsvcid": "4420" 00:10:22.240 }, 00:10:22.240 "peer_address": { 00:10:22.240 "trtype": "TCP", 00:10:22.240 "adrfam": "IPv4", 00:10:22.240 "traddr": "10.0.0.1", 00:10:22.240 "trsvcid": "35984" 00:10:22.240 }, 00:10:22.240 "auth": { 00:10:22.240 "state": "completed", 00:10:22.240 "digest": "sha256", 00:10:22.240 "dhgroup": "ffdhe3072" 00:10:22.240 } 00:10:22.240 } 00:10:22.240 ]' 00:10:22.240 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.241 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.241 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:22.241 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:22.241 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:22.499 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.499 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.499 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.757 11:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:10:23.323 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.323 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:23.323 11:34:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.323 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.323 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.323 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.324 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:23.324 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.581 11:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.839 00:10:23.839 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:23.839 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.839 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.097 { 00:10:24.097 "cntlid": 21, 
00:10:24.097 "qid": 0, 00:10:24.097 "state": "enabled", 00:10:24.097 "thread": "nvmf_tgt_poll_group_000", 00:10:24.097 "listen_address": { 00:10:24.097 "trtype": "TCP", 00:10:24.097 "adrfam": "IPv4", 00:10:24.097 "traddr": "10.0.0.2", 00:10:24.097 "trsvcid": "4420" 00:10:24.097 }, 00:10:24.097 "peer_address": { 00:10:24.097 "trtype": "TCP", 00:10:24.097 "adrfam": "IPv4", 00:10:24.097 "traddr": "10.0.0.1", 00:10:24.097 "trsvcid": "36010" 00:10:24.097 }, 00:10:24.097 "auth": { 00:10:24.097 "state": "completed", 00:10:24.097 "digest": "sha256", 00:10:24.097 "dhgroup": "ffdhe3072" 00:10:24.097 } 00:10:24.097 } 00:10:24.097 ]' 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.097 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:24.361 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:24.361 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:24.361 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.361 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.361 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.618 11:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:10:25.184 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.185 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:25.185 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.185 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.185 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.185 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.185 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:25.185 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.751 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:25.752 11:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:26.009 00:10:26.009 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:26.009 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:26.009 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:26.267 { 00:10:26.267 "cntlid": 23, 00:10:26.267 "qid": 0, 00:10:26.267 "state": "enabled", 00:10:26.267 "thread": "nvmf_tgt_poll_group_000", 00:10:26.267 "listen_address": { 00:10:26.267 "trtype": "TCP", 00:10:26.267 "adrfam": "IPv4", 00:10:26.267 "traddr": "10.0.0.2", 00:10:26.267 "trsvcid": "4420" 00:10:26.267 }, 00:10:26.267 "peer_address": { 00:10:26.267 "trtype": "TCP", 00:10:26.267 "adrfam": "IPv4", 00:10:26.267 "traddr": "10.0.0.1", 00:10:26.267 "trsvcid": "33168" 00:10:26.267 }, 00:10:26.267 "auth": { 00:10:26.267 "state": "completed", 00:10:26.267 "digest": "sha256", 00:10:26.267 "dhgroup": "ffdhe3072" 00:10:26.267 } 00:10:26.267 } 00:10:26.267 ]' 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.267 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.525 11:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:10:27.091 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.091 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:27.091 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.091 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.348 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.348 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:27.348 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:27.348 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:27.348 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.606 11:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.863 00:10:27.863 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.863 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:27.863 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.121 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.121 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.121 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.121 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.121 11:34:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.121 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.121 { 00:10:28.121 "cntlid": 25, 00:10:28.121 "qid": 0, 00:10:28.121 "state": "enabled", 00:10:28.121 "thread": "nvmf_tgt_poll_group_000", 00:10:28.121 "listen_address": { 00:10:28.121 "trtype": "TCP", 00:10:28.121 "adrfam": "IPv4", 00:10:28.121 "traddr": "10.0.0.2", 00:10:28.121 "trsvcid": "4420" 00:10:28.121 }, 00:10:28.121 "peer_address": { 00:10:28.121 "trtype": "TCP", 00:10:28.121 "adrfam": "IPv4", 00:10:28.121 "traddr": "10.0.0.1", 00:10:28.121 "trsvcid": "33190" 00:10:28.121 }, 00:10:28.121 "auth": { 00:10:28.121 "state": "completed", 00:10:28.121 "digest": "sha256", 00:10:28.121 "dhgroup": "ffdhe4096" 00:10:28.121 } 00:10:28.121 } 00:10:28.121 ]' 00:10:28.122 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:28.122 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.122 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:28.122 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:28.122 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:28.379 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.379 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.379 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.637 11:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:10:29.203 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.203 
11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:29.203 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.203 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.203 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.203 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.203 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:29.203 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:29.463 11:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.029 00:10:30.029 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.029 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.029 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.029 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.029 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.029 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.029 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.288 { 00:10:30.288 "cntlid": 27, 00:10:30.288 "qid": 0, 00:10:30.288 "state": "enabled", 00:10:30.288 "thread": "nvmf_tgt_poll_group_000", 00:10:30.288 "listen_address": { 00:10:30.288 "trtype": "TCP", 00:10:30.288 "adrfam": "IPv4", 00:10:30.288 "traddr": "10.0.0.2", 00:10:30.288 "trsvcid": "4420" 00:10:30.288 }, 00:10:30.288 "peer_address": { 00:10:30.288 "trtype": "TCP", 00:10:30.288 "adrfam": "IPv4", 00:10:30.288 "traddr": "10.0.0.1", 00:10:30.288 "trsvcid": "33214" 00:10:30.288 }, 00:10:30.288 "auth": { 00:10:30.288 "state": "completed", 00:10:30.288 "digest": "sha256", 00:10:30.288 "dhgroup": "ffdhe4096" 00:10:30.288 } 00:10:30.288 } 00:10:30.288 ]' 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.288 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.546 11:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:10:31.481 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.481 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:31.481 11:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.481 11:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.481 11:34:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.481 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.481 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:31.481 11:34:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.740 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.998 00:10:32.256 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:32.256 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.256 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.514 { 00:10:32.514 "cntlid": 29, 00:10:32.514 "qid": 0, 00:10:32.514 "state": "enabled", 00:10:32.514 "thread": "nvmf_tgt_poll_group_000", 00:10:32.514 "listen_address": { 00:10:32.514 "trtype": "TCP", 00:10:32.514 "adrfam": "IPv4", 00:10:32.514 "traddr": "10.0.0.2", 00:10:32.514 "trsvcid": "4420" 00:10:32.514 }, 00:10:32.514 "peer_address": { 00:10:32.514 "trtype": "TCP", 00:10:32.514 "adrfam": "IPv4", 00:10:32.514 "traddr": "10.0.0.1", 00:10:32.514 "trsvcid": "33244" 00:10:32.514 }, 00:10:32.514 "auth": { 00:10:32.514 "state": "completed", 00:10:32.514 "digest": "sha256", 00:10:32.514 "dhgroup": "ffdhe4096" 00:10:32.514 } 00:10:32.514 } 00:10:32.514 ]' 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:32.514 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.515 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.515 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.515 11:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.772 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:10:33.707 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.707 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:33.707 11:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.707 11:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.707 11:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.707 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.707 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.707 11:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:33.966 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:34.225 00:10:34.225 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:34.225 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.225 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.484 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.484 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.484 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.484 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.484 11:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.484 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:34.484 { 00:10:34.484 "cntlid": 31, 00:10:34.484 "qid": 0, 00:10:34.484 "state": "enabled", 00:10:34.484 "thread": "nvmf_tgt_poll_group_000", 00:10:34.484 "listen_address": { 00:10:34.484 "trtype": "TCP", 00:10:34.484 "adrfam": "IPv4", 00:10:34.484 "traddr": "10.0.0.2", 00:10:34.484 "trsvcid": "4420" 00:10:34.484 }, 00:10:34.484 "peer_address": { 00:10:34.484 "trtype": "TCP", 00:10:34.484 "adrfam": "IPv4", 00:10:34.484 "traddr": "10.0.0.1", 00:10:34.484 "trsvcid": "52502" 00:10:34.484 }, 00:10:34.484 "auth": { 00:10:34.484 "state": "completed", 00:10:34.484 "digest": "sha256", 00:10:34.484 "dhgroup": "ffdhe4096" 00:10:34.484 } 00:10:34.484 } 00:10:34.484 ]' 00:10:34.484 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:34.742 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.742 11:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:34.742 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:34.742 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:34.742 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.742 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.742 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.001 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.568 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:35.568 11:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:35.826 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:35.826 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.826 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:35.826 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:35.826 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:35.826 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.826 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.826 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.827 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.827 11:34:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.827 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.827 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.395 00:10:36.395 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.395 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.395 11:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.653 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.653 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
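For readers reconstructing the flow from the interleaved trace, this is the shape of one connect_authenticate() pass (here the sha256/ffdhe6144 pass using key0): pin the SPDK host-side initiator to a single digest/dhgroup, allow the host NQN on the subsystem with the pass's key pair, attach a controller through the host stack so in-band DH-HMAC-CHAP actually runs, then read the target's qpair list back and check the negotiated parameters. A minimal sketch using only the commands visible in the trace; rpc_cmd is the harness wrapper for the target's rpc.py (defined outside this excerpt, so its socket below is an assumption), and key0/ckey0 name DH-HMAC-CHAP keys registered earlier in auth.sh, also outside this excerpt:

  # SPDK host (initiator) process, exactly as the hostrpc lines in the trace invoke it
  host_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0

  # pin the initiator to one digest/dhgroup combination for this pass
  host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # allow the host on the subsystem with bidirectional secrets (key0 = host key, ckey0 = controller key)
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach through the host stack; DH-HMAC-CHAP runs during this connect
  host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the controller exists and that the target's qpair negotiated what was requested
  host_rpc bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  # tear the controller down before the next pass
  host_rpc bdev_nvme_detach_controller nvme0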
00:10:36.653 11:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.653 11:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.653 11:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.653 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.653 { 00:10:36.653 "cntlid": 33, 00:10:36.653 "qid": 0, 00:10:36.653 "state": "enabled", 00:10:36.653 "thread": "nvmf_tgt_poll_group_000", 00:10:36.653 "listen_address": { 00:10:36.653 "trtype": "TCP", 00:10:36.653 "adrfam": "IPv4", 00:10:36.653 "traddr": "10.0.0.2", 00:10:36.653 "trsvcid": "4420" 00:10:36.653 }, 00:10:36.653 "peer_address": { 00:10:36.653 "trtype": "TCP", 00:10:36.653 "adrfam": "IPv4", 00:10:36.653 "traddr": "10.0.0.1", 00:10:36.653 "trsvcid": "52532" 00:10:36.653 }, 00:10:36.653 "auth": { 00:10:36.653 "state": "completed", 00:10:36.653 "digest": "sha256", 00:10:36.653 "dhgroup": "ffdhe6144" 00:10:36.653 } 00:10:36.653 } 00:10:36.653 ]' 00:10:36.653 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.912 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.912 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:36.912 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:36.912 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:36.912 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.912 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.912 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.171 11:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:10:37.764 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.764 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:37.764 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.764 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.764 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.764 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.764 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:37.764 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.024 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.591 00:10:38.591 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.591 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.591 11:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.849 { 00:10:38.849 "cntlid": 35, 00:10:38.849 "qid": 0, 00:10:38.849 "state": "enabled", 00:10:38.849 "thread": "nvmf_tgt_poll_group_000", 00:10:38.849 "listen_address": { 00:10:38.849 "trtype": "TCP", 00:10:38.849 "adrfam": "IPv4", 00:10:38.849 "traddr": "10.0.0.2", 00:10:38.849 "trsvcid": "4420" 00:10:38.849 }, 00:10:38.849 "peer_address": { 00:10:38.849 "trtype": "TCP", 00:10:38.849 "adrfam": "IPv4", 00:10:38.849 "traddr": "10.0.0.1", 00:10:38.849 "trsvcid": "52570" 00:10:38.849 }, 00:10:38.849 "auth": { 00:10:38.849 "state": "completed", 00:10:38.849 "digest": "sha256", 00:10:38.849 "dhgroup": "ffdhe6144" 00:10:38.849 } 00:10:38.849 } 00:10:38.849 ]' 00:10:38.849 11:34:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.849 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.850 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.107 11:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.040 
11:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.040 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.602 00:10:40.602 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.602 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.602 11:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.859 { 00:10:40.859 "cntlid": 37, 00:10:40.859 "qid": 0, 00:10:40.859 "state": "enabled", 00:10:40.859 "thread": "nvmf_tgt_poll_group_000", 00:10:40.859 "listen_address": { 00:10:40.859 "trtype": "TCP", 00:10:40.859 "adrfam": "IPv4", 00:10:40.859 "traddr": "10.0.0.2", 00:10:40.859 "trsvcid": "4420" 00:10:40.859 }, 00:10:40.859 "peer_address": { 00:10:40.859 "trtype": "TCP", 00:10:40.859 "adrfam": "IPv4", 00:10:40.859 "traddr": "10.0.0.1", 00:10:40.859 "trsvcid": "52592" 00:10:40.859 }, 00:10:40.859 "auth": { 00:10:40.859 "state": "completed", 00:10:40.859 "digest": "sha256", 00:10:40.859 "dhgroup": "ffdhe6144" 00:10:40.859 } 00:10:40.859 } 00:10:40.859 ]' 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.859 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.116 11:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 
7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.048 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.614 00:10:42.614 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.614 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.614 11:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.873 { 00:10:42.873 "cntlid": 39, 00:10:42.873 "qid": 0, 00:10:42.873 "state": "enabled", 00:10:42.873 "thread": "nvmf_tgt_poll_group_000", 00:10:42.873 "listen_address": { 00:10:42.873 "trtype": "TCP", 00:10:42.873 "adrfam": "IPv4", 00:10:42.873 "traddr": "10.0.0.2", 00:10:42.873 "trsvcid": "4420" 00:10:42.873 }, 00:10:42.873 "peer_address": { 00:10:42.873 "trtype": "TCP", 00:10:42.873 "adrfam": "IPv4", 00:10:42.873 "traddr": "10.0.0.1", 00:10:42.873 "trsvcid": "52626" 00:10:42.873 }, 00:10:42.873 "auth": { 00:10:42.873 "state": "completed", 00:10:42.873 "digest": "sha256", 00:10:42.873 "dhgroup": "ffdhe6144" 00:10:42.873 } 00:10:42.873 } 00:10:42.873 ]' 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.873 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.131 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:43.131 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.131 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.131 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.131 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.390 11:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:10:43.956 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.214 11:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.781 00:10:44.781 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.781 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.781 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.040 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.040 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.040 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.040 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.040 11:34:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.040 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.040 { 00:10:45.040 "cntlid": 41, 00:10:45.040 "qid": 0, 00:10:45.040 "state": "enabled", 00:10:45.040 "thread": "nvmf_tgt_poll_group_000", 00:10:45.040 "listen_address": { 00:10:45.040 "trtype": "TCP", 00:10:45.040 "adrfam": "IPv4", 00:10:45.040 "traddr": "10.0.0.2", 00:10:45.040 "trsvcid": "4420" 00:10:45.040 }, 00:10:45.040 "peer_address": { 00:10:45.040 "trtype": "TCP", 00:10:45.040 "adrfam": "IPv4", 00:10:45.040 "traddr": "10.0.0.1", 00:10:45.040 "trsvcid": "44036" 00:10:45.040 }, 00:10:45.040 "auth": { 00:10:45.040 
"state": "completed", 00:10:45.040 "digest": "sha256", 00:10:45.040 "dhgroup": "ffdhe8192" 00:10:45.040 } 00:10:45.040 } 00:10:45.040 ]' 00:10:45.040 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.299 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.299 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.299 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:45.299 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.299 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.299 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.299 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.557 11:34:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:10:46.125 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.125 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:46.125 11:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.125 11:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.125 11:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.125 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:46.125 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:46.125 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:46.383 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:46.383 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.383 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.383 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:46.383 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:46.383 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.384 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:10:46.384 11:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.384 11:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.384 11:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.384 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.384 11:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.949 00:10:46.949 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.949 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.949 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.208 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.208 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.208 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.208 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.208 11:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.208 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.208 { 00:10:47.208 "cntlid": 43, 00:10:47.208 "qid": 0, 00:10:47.208 "state": "enabled", 00:10:47.208 "thread": "nvmf_tgt_poll_group_000", 00:10:47.208 "listen_address": { 00:10:47.208 "trtype": "TCP", 00:10:47.208 "adrfam": "IPv4", 00:10:47.208 "traddr": "10.0.0.2", 00:10:47.208 "trsvcid": "4420" 00:10:47.208 }, 00:10:47.208 "peer_address": { 00:10:47.208 "trtype": "TCP", 00:10:47.208 "adrfam": "IPv4", 00:10:47.208 "traddr": "10.0.0.1", 00:10:47.208 "trsvcid": "44064" 00:10:47.208 }, 00:10:47.208 "auth": { 00:10:47.208 "state": "completed", 00:10:47.208 "digest": "sha256", 00:10:47.208 "dhgroup": "ffdhe8192" 00:10:47.208 } 00:10:47.208 } 00:10:47.208 ]' 00:10:47.208 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.466 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.466 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:47.466 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:47.466 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:47.466 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.466 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.466 11:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.724 11:34:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:10:48.657 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.657 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:48.657 11:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.657 11:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.657 11:34:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.657 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.657 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.657 11:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.657 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:48.657 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.657 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.657 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:48.657 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:48.658 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.658 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.658 11:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.658 11:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.658 11:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.658 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.658 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.591 00:10:49.591 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.591 11:34:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.591 11:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.591 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.591 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.591 11:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.591 11:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.591 11:34:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.591 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.591 { 00:10:49.591 "cntlid": 45, 00:10:49.591 "qid": 0, 00:10:49.591 "state": "enabled", 00:10:49.591 "thread": "nvmf_tgt_poll_group_000", 00:10:49.591 "listen_address": { 00:10:49.591 "trtype": "TCP", 00:10:49.591 "adrfam": "IPv4", 00:10:49.591 "traddr": "10.0.0.2", 00:10:49.591 "trsvcid": "4420" 00:10:49.591 }, 00:10:49.591 "peer_address": { 00:10:49.591 "trtype": "TCP", 00:10:49.591 "adrfam": "IPv4", 00:10:49.591 "traddr": "10.0.0.1", 00:10:49.591 "trsvcid": "44076" 00:10:49.591 }, 00:10:49.591 "auth": { 00:10:49.591 "state": "completed", 00:10:49.591 "digest": "sha256", 00:10:49.591 "dhgroup": "ffdhe8192" 00:10:49.591 } 00:10:49.591 } 00:10:49.591 ]' 00:10:49.591 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.849 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.849 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.849 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:49.849 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.849 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.849 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.849 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.107 11:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:10:50.672 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.672 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:50.672 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.672 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.672 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.672 11:34:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.672 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.672 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.238 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.804 00:10:51.804 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.804 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.804 11:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.804 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.804 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.804 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.804 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.804 11:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.804 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.804 { 00:10:51.804 "cntlid": 47, 00:10:51.804 "qid": 0, 00:10:51.804 "state": "enabled", 00:10:51.804 "thread": "nvmf_tgt_poll_group_000", 00:10:51.804 "listen_address": { 00:10:51.804 "trtype": "TCP", 00:10:51.804 "adrfam": "IPv4", 00:10:51.804 "traddr": "10.0.0.2", 00:10:51.804 "trsvcid": "4420" 00:10:51.804 }, 00:10:51.804 "peer_address": { 00:10:51.804 "trtype": "TCP", 
00:10:51.804 "adrfam": "IPv4", 00:10:51.804 "traddr": "10.0.0.1", 00:10:51.804 "trsvcid": "44100" 00:10:51.804 }, 00:10:51.804 "auth": { 00:10:51.804 "state": "completed", 00:10:51.804 "digest": "sha256", 00:10:51.804 "dhgroup": "ffdhe8192" 00:10:51.804 } 00:10:51.804 } 00:10:51.804 ]' 00:10:51.804 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.063 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.063 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.063 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:52.063 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.063 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.063 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.063 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.321 11:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:52.924 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.182 
11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.182 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.750 00:10:53.750 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.750 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.750 11:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.750 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.750 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.750 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.751 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.751 11:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.751 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.751 { 00:10:53.751 "cntlid": 49, 00:10:53.751 "qid": 0, 00:10:53.751 "state": "enabled", 00:10:53.751 "thread": "nvmf_tgt_poll_group_000", 00:10:53.751 "listen_address": { 00:10:53.751 "trtype": "TCP", 00:10:53.751 "adrfam": "IPv4", 00:10:53.751 "traddr": "10.0.0.2", 00:10:53.751 "trsvcid": "4420" 00:10:53.751 }, 00:10:53.751 "peer_address": { 00:10:53.751 "trtype": "TCP", 00:10:53.751 "adrfam": "IPv4", 00:10:53.751 "traddr": "10.0.0.1", 00:10:53.751 "trsvcid": "44130" 00:10:53.751 }, 00:10:53.751 "auth": { 00:10:53.751 "state": "completed", 00:10:53.751 "digest": "sha384", 00:10:53.751 "dhgroup": "null" 00:10:53.751 } 00:10:53.751 } 00:10:53.751 ]' 00:10:53.751 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.009 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.009 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.009 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:54.009 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.009 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.009 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:10:54.009 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.267 11:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:10:54.835 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.835 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:54.835 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.835 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.835 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.835 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.835 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:54.835 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.096 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.357 00:10:55.357 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.357 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.357 11:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.620 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.620 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.620 11:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.620 11:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.620 11:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.620 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.620 { 00:10:55.620 "cntlid": 51, 00:10:55.620 "qid": 0, 00:10:55.621 "state": "enabled", 00:10:55.621 "thread": "nvmf_tgt_poll_group_000", 00:10:55.621 "listen_address": { 00:10:55.621 "trtype": "TCP", 00:10:55.621 "adrfam": "IPv4", 00:10:55.621 "traddr": "10.0.0.2", 00:10:55.621 "trsvcid": "4420" 00:10:55.621 }, 00:10:55.621 "peer_address": { 00:10:55.621 "trtype": "TCP", 00:10:55.621 "adrfam": "IPv4", 00:10:55.621 "traddr": "10.0.0.1", 00:10:55.621 "trsvcid": "60378" 00:10:55.621 }, 00:10:55.621 "auth": { 00:10:55.621 "state": "completed", 00:10:55.621 "digest": "sha384", 00:10:55.621 "dhgroup": "null" 00:10:55.621 } 00:10:55.621 } 00:10:55.621 ]' 00:10:55.621 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.886 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.886 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.886 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:55.886 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.886 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.886 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.886 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.147 11:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.095 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.354 00:10:57.355 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.355 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.355 11:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.922 { 00:10:57.922 "cntlid": 53, 00:10:57.922 "qid": 0, 00:10:57.922 "state": "enabled", 
00:10:57.922 "thread": "nvmf_tgt_poll_group_000", 00:10:57.922 "listen_address": { 00:10:57.922 "trtype": "TCP", 00:10:57.922 "adrfam": "IPv4", 00:10:57.922 "traddr": "10.0.0.2", 00:10:57.922 "trsvcid": "4420" 00:10:57.922 }, 00:10:57.922 "peer_address": { 00:10:57.922 "trtype": "TCP", 00:10:57.922 "adrfam": "IPv4", 00:10:57.922 "traddr": "10.0.0.1", 00:10:57.922 "trsvcid": "60406" 00:10:57.922 }, 00:10:57.922 "auth": { 00:10:57.922 "state": "completed", 00:10:57.922 "digest": "sha384", 00:10:57.922 "dhgroup": "null" 00:10:57.922 } 00:10:57.922 } 00:10:57.922 ]' 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.922 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.179 11:35:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:10:58.743 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.743 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:10:58.743 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.743 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.743 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.743 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.743 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:58.743 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:59.002 
11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:59.002 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:59.569 00:10:59.569 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.569 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.569 11:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.569 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.569 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.569 11:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.569 11:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.826 11:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.826 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.826 { 00:10:59.826 "cntlid": 55, 00:10:59.826 "qid": 0, 00:10:59.826 "state": "enabled", 00:10:59.826 "thread": "nvmf_tgt_poll_group_000", 00:10:59.826 "listen_address": { 00:10:59.826 "trtype": "TCP", 00:10:59.826 "adrfam": "IPv4", 00:10:59.826 "traddr": "10.0.0.2", 00:10:59.826 "trsvcid": "4420" 00:10:59.826 }, 00:10:59.826 "peer_address": { 00:10:59.826 "trtype": "TCP", 00:10:59.826 "adrfam": "IPv4", 00:10:59.826 "traddr": "10.0.0.1", 00:10:59.826 "trsvcid": "60426" 00:10:59.826 }, 00:10:59.826 "auth": { 00:10:59.826 "state": "completed", 00:10:59.826 "digest": "sha384", 00:10:59.826 "dhgroup": "null" 00:10:59.826 } 00:10:59.826 } 00:10:59.826 ]' 00:10:59.826 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.826 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.826 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.826 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:59.826 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.826 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.827 11:35:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.827 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.083 11:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:00.649 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.907 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.474 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.474 { 00:11:01.474 "cntlid": 57, 00:11:01.474 "qid": 0, 00:11:01.474 "state": "enabled", 00:11:01.474 "thread": "nvmf_tgt_poll_group_000", 00:11:01.474 "listen_address": { 00:11:01.474 "trtype": "TCP", 00:11:01.474 "adrfam": "IPv4", 00:11:01.474 "traddr": "10.0.0.2", 00:11:01.474 "trsvcid": "4420" 00:11:01.474 }, 00:11:01.474 "peer_address": { 00:11:01.474 "trtype": "TCP", 00:11:01.474 "adrfam": "IPv4", 00:11:01.474 "traddr": "10.0.0.1", 00:11:01.474 "trsvcid": "60444" 00:11:01.474 }, 00:11:01.474 "auth": { 00:11:01.474 "state": "completed", 00:11:01.474 "digest": "sha384", 00:11:01.474 "dhgroup": "ffdhe2048" 00:11:01.474 } 00:11:01.474 } 00:11:01.474 ]' 00:11:01.474 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.733 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.733 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.733 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:01.733 11:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.733 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.733 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.733 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.992 11:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.926 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.492 00:11:03.492 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.492 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.492 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.750 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.750 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.750 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.750 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.750 11:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.750 
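The matching host-side steps for the iteration above are sketched here, assuming the bdev_nvme host application exposes its RPC socket at /var/tmp/host.sock as in this run; these are the same bdev_nvme calls the hostrpc wrapper issues, and the comments mirror the [[ ... ]] checks in auth.sh.

# host side: attach with key1/ckey1, confirm the controller exists, then detach before the next key
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0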
11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.750 { 00:11:03.750 "cntlid": 59, 00:11:03.750 "qid": 0, 00:11:03.750 "state": "enabled", 00:11:03.750 "thread": "nvmf_tgt_poll_group_000", 00:11:03.750 "listen_address": { 00:11:03.750 "trtype": "TCP", 00:11:03.750 "adrfam": "IPv4", 00:11:03.750 "traddr": "10.0.0.2", 00:11:03.750 "trsvcid": "4420" 00:11:03.750 }, 00:11:03.750 "peer_address": { 00:11:03.750 "trtype": "TCP", 00:11:03.750 "adrfam": "IPv4", 00:11:03.750 "traddr": "10.0.0.1", 00:11:03.750 "trsvcid": "60464" 00:11:03.750 }, 00:11:03.750 "auth": { 00:11:03.750 "state": "completed", 00:11:03.750 "digest": "sha384", 00:11:03.750 "dhgroup": "ffdhe2048" 00:11:03.750 } 00:11:03.750 } 00:11:03.750 ]' 00:11:03.750 11:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.750 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.750 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.750 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:03.750 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.750 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.750 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.750 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.008 11:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.941 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.942 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.942 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.199 00:11:05.199 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.199 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.199 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.457 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.457 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.457 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.457 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.457 11:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.457 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.457 { 00:11:05.457 "cntlid": 61, 00:11:05.457 "qid": 0, 00:11:05.457 "state": "enabled", 00:11:05.457 "thread": "nvmf_tgt_poll_group_000", 00:11:05.457 "listen_address": { 00:11:05.457 "trtype": "TCP", 00:11:05.457 "adrfam": "IPv4", 00:11:05.457 "traddr": "10.0.0.2", 00:11:05.457 "trsvcid": "4420" 00:11:05.457 }, 00:11:05.457 "peer_address": { 00:11:05.457 "trtype": "TCP", 00:11:05.457 "adrfam": "IPv4", 00:11:05.457 "traddr": "10.0.0.1", 00:11:05.457 "trsvcid": "58058" 00:11:05.457 }, 00:11:05.457 "auth": { 00:11:05.457 "state": "completed", 00:11:05.457 "digest": "sha384", 00:11:05.457 "dhgroup": "ffdhe2048" 00:11:05.457 } 00:11:05.457 } 00:11:05.457 ]' 00:11:05.457 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.715 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.715 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.715 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:05.715 11:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.715 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.715 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.715 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.973 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:11:06.538 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.539 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:06.539 11:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.539 11:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.539 11:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.539 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.539 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:06.539 11:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.797 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:06.797 11:35:10 
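Each iteration above also exercises the kernel initiator (auth.sh@52/@55); that check is essentially the nvme-cli pair below. The DHHC-1 placeholders stand for the generated secrets printed verbatim elsewhere in this log, and -i 1 keeps the connection to a single I/O queue as in the test.

# kernel host: connect with explicit DH-HMAC-CHAP secrets, then disconnect
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 \
    --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 \
    --dhchap-secret '<DHHC-1 host secret for the key under test>' \
    --dhchap-ctrl-secret '<DHHC-1 controller secret, when the key has one>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # log reports: 1 controller(s) disconnected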
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.363 00:11:07.363 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.363 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.363 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.621 { 00:11:07.621 "cntlid": 63, 00:11:07.621 "qid": 0, 00:11:07.621 "state": "enabled", 00:11:07.621 "thread": "nvmf_tgt_poll_group_000", 00:11:07.621 "listen_address": { 00:11:07.621 "trtype": "TCP", 00:11:07.621 "adrfam": "IPv4", 00:11:07.621 "traddr": "10.0.0.2", 00:11:07.621 "trsvcid": "4420" 00:11:07.621 }, 00:11:07.621 "peer_address": { 00:11:07.621 "trtype": "TCP", 00:11:07.621 "adrfam": "IPv4", 00:11:07.621 "traddr": "10.0.0.1", 00:11:07.621 "trsvcid": "58088" 00:11:07.621 }, 00:11:07.621 "auth": { 00:11:07.621 "state": "completed", 00:11:07.621 "digest": "sha384", 00:11:07.621 "dhgroup": "ffdhe2048" 00:11:07.621 } 00:11:07.621 } 00:11:07.621 ]' 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.621 11:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.621 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.621 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.621 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.188 11:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:08.780 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.038 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.296 00:11:09.296 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.296 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.296 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.554 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.554 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.554 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.554 11:35:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.554 11:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.554 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.554 { 00:11:09.554 "cntlid": 65, 00:11:09.554 "qid": 0, 00:11:09.554 "state": "enabled", 00:11:09.554 "thread": "nvmf_tgt_poll_group_000", 00:11:09.554 "listen_address": { 00:11:09.554 "trtype": "TCP", 00:11:09.554 "adrfam": "IPv4", 00:11:09.554 "traddr": "10.0.0.2", 00:11:09.554 "trsvcid": "4420" 00:11:09.554 }, 00:11:09.554 "peer_address": { 00:11:09.554 "trtype": "TCP", 00:11:09.554 "adrfam": "IPv4", 00:11:09.554 "traddr": "10.0.0.1", 00:11:09.554 "trsvcid": "58112" 00:11:09.554 }, 00:11:09.554 "auth": { 00:11:09.554 "state": "completed", 00:11:09.554 "digest": "sha384", 00:11:09.554 "dhgroup": "ffdhe3072" 00:11:09.554 } 00:11:09.554 } 00:11:09.554 ]' 00:11:09.554 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.554 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.554 11:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.811 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:09.811 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.811 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.811 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.811 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.069 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:11:10.633 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.633 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:10.633 11:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.633 11:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.633 11:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.633 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.633 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:10.633 11:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:10.890 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe3072 1 00:11:10.890 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.891 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.147 00:11:11.147 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.147 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.147 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.405 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.405 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.405 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.405 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.405 11:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.405 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.405 { 00:11:11.405 "cntlid": 67, 00:11:11.405 "qid": 0, 00:11:11.405 "state": "enabled", 00:11:11.405 "thread": "nvmf_tgt_poll_group_000", 00:11:11.405 "listen_address": { 00:11:11.405 "trtype": "TCP", 00:11:11.405 "adrfam": "IPv4", 00:11:11.405 "traddr": "10.0.0.2", 00:11:11.405 "trsvcid": "4420" 00:11:11.405 }, 00:11:11.405 "peer_address": { 00:11:11.405 "trtype": "TCP", 00:11:11.405 "adrfam": "IPv4", 00:11:11.405 "traddr": "10.0.0.1", 00:11:11.405 "trsvcid": "58130" 00:11:11.405 }, 00:11:11.405 "auth": { 00:11:11.405 "state": "completed", 00:11:11.405 "digest": "sha384", 00:11:11.405 "dhgroup": "ffdhe3072" 00:11:11.405 } 00:11:11.405 } 00:11:11.405 ]' 00:11:11.405 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.662 11:35:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.662 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.662 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:11.662 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.662 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.662 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.662 11:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.919 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:11:12.854 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.854 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:12.854 11:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.854 11:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.854 11:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.854 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.854 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:12.854 11:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.854 11:35:16 
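Before each pass the host is restricted to the digest and DH group under test (auth.sh@94); for the ffdhe3072 pass shown here that is a single RPC against the same host socket:

# host side: allow only sha384 with ffdhe3072 for subsequent attach attempts
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072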
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.854 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.113 00:11:13.370 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.370 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.370 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.628 { 00:11:13.628 "cntlid": 69, 00:11:13.628 "qid": 0, 00:11:13.628 "state": "enabled", 00:11:13.628 "thread": "nvmf_tgt_poll_group_000", 00:11:13.628 "listen_address": { 00:11:13.628 "trtype": "TCP", 00:11:13.628 "adrfam": "IPv4", 00:11:13.628 "traddr": "10.0.0.2", 00:11:13.628 "trsvcid": "4420" 00:11:13.628 }, 00:11:13.628 "peer_address": { 00:11:13.628 "trtype": "TCP", 00:11:13.628 "adrfam": "IPv4", 00:11:13.628 "traddr": "10.0.0.1", 00:11:13.628 "trsvcid": "58166" 00:11:13.628 }, 00:11:13.628 "auth": { 00:11:13.628 "state": "completed", 00:11:13.628 "digest": "sha384", 00:11:13.628 "dhgroup": "ffdhe3072" 00:11:13.628 } 00:11:13.628 } 00:11:13.628 ]' 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:13.628 11:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.628 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.628 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.628 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.887 11:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret 
DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:11:14.821 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.821 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:14.821 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.821 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.821 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.821 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:14.821 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.080 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.336 00:11:15.336 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.336 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.336 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.594 11:35:18 
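
One detail worth noting in the iteration above: for key index 3 the nvmf_subsystem_add_host call carries only --dhchap-key key3 and no --dhchap-ctrlr-key, because the traced line ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to the controller-key arguments only when ckeys[3] is non-empty. A minimal standalone illustration of that bash idiom (the array contents here are placeholders, not the test's real key configuration):

  # ${var:+word} expands to word only if var is set and non-empty, so the
  # ckey array is either empty or the complete "--dhchap-ctrlr-key ckeyN" pair.
  ckeys=([0]=present [1]=present [2]=present [3]=)   # hypothetical: no ctrl key for index 3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra args: ${ckey[*]:-<none>}"              # prints '<none>' for keyid=3
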
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.594 { 00:11:15.594 "cntlid": 71, 00:11:15.594 "qid": 0, 00:11:15.594 "state": "enabled", 00:11:15.594 "thread": "nvmf_tgt_poll_group_000", 00:11:15.594 "listen_address": { 00:11:15.594 "trtype": "TCP", 00:11:15.594 "adrfam": "IPv4", 00:11:15.594 "traddr": "10.0.0.2", 00:11:15.594 "trsvcid": "4420" 00:11:15.594 }, 00:11:15.594 "peer_address": { 00:11:15.594 "trtype": "TCP", 00:11:15.594 "adrfam": "IPv4", 00:11:15.594 "traddr": "10.0.0.1", 00:11:15.594 "trsvcid": "40708" 00:11:15.594 }, 00:11:15.594 "auth": { 00:11:15.594 "state": "completed", 00:11:15.594 "digest": "sha384", 00:11:15.594 "dhgroup": "ffdhe3072" 00:11:15.594 } 00:11:15.594 } 00:11:15.594 ]' 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:15.594 11:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.852 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.852 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.852 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.852 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:16.791 11:35:19 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.061 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.319 00:11:17.319 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.319 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.319 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.577 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.577 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.577 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.577 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.577 11:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.577 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.577 { 00:11:17.577 "cntlid": 73, 00:11:17.577 "qid": 0, 00:11:17.577 "state": "enabled", 00:11:17.577 "thread": "nvmf_tgt_poll_group_000", 00:11:17.577 "listen_address": { 00:11:17.577 "trtype": "TCP", 00:11:17.577 "adrfam": "IPv4", 00:11:17.577 "traddr": "10.0.0.2", 00:11:17.577 "trsvcid": "4420" 00:11:17.577 }, 00:11:17.577 "peer_address": { 00:11:17.577 "trtype": "TCP", 00:11:17.577 "adrfam": "IPv4", 00:11:17.577 "traddr": "10.0.0.1", 00:11:17.577 "trsvcid": "40728" 00:11:17.577 }, 00:11:17.577 "auth": { 00:11:17.577 "state": "completed", 00:11:17.577 "digest": "sha384", 
00:11:17.577 "dhgroup": "ffdhe4096" 00:11:17.577 } 00:11:17.577 } 00:11:17.577 ]' 00:11:17.577 11:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.834 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.834 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.834 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:17.834 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.834 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.834 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.834 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.111 11:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:11:18.678 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.678 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:18.678 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.678 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.678 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.678 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.678 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:18.678 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.937 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.505 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.505 { 00:11:19.505 "cntlid": 75, 00:11:19.505 "qid": 0, 00:11:19.505 "state": "enabled", 00:11:19.505 "thread": "nvmf_tgt_poll_group_000", 00:11:19.505 "listen_address": { 00:11:19.505 "trtype": "TCP", 00:11:19.505 "adrfam": "IPv4", 00:11:19.505 "traddr": "10.0.0.2", 00:11:19.505 "trsvcid": "4420" 00:11:19.505 }, 00:11:19.505 "peer_address": { 00:11:19.505 "trtype": "TCP", 00:11:19.505 "adrfam": "IPv4", 00:11:19.505 "traddr": "10.0.0.1", 00:11:19.505 "trsvcid": "40750" 00:11:19.505 }, 00:11:19.505 "auth": { 00:11:19.505 "state": "completed", 00:11:19.505 "digest": "sha384", 00:11:19.505 "dhgroup": "ffdhe4096" 00:11:19.505 } 00:11:19.505 } 00:11:19.505 ]' 00:11:19.505 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.763 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.763 11:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.763 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.763 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.763 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.763 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.763 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.021 11:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:11:20.955 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.955 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:20.955 11:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.955 11:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.955 11:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.955 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.955 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:20.955 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.956 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.522 00:11:21.522 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.522 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.522 11:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.780 { 00:11:21.780 "cntlid": 77, 00:11:21.780 "qid": 0, 00:11:21.780 "state": "enabled", 00:11:21.780 "thread": "nvmf_tgt_poll_group_000", 00:11:21.780 "listen_address": { 00:11:21.780 "trtype": "TCP", 00:11:21.780 "adrfam": "IPv4", 00:11:21.780 "traddr": "10.0.0.2", 00:11:21.780 "trsvcid": "4420" 00:11:21.780 }, 00:11:21.780 "peer_address": { 00:11:21.780 "trtype": "TCP", 00:11:21.780 "adrfam": "IPv4", 00:11:21.780 "traddr": "10.0.0.1", 00:11:21.780 "trsvcid": "40772" 00:11:21.780 }, 00:11:21.780 "auth": { 00:11:21.780 "state": "completed", 00:11:21.780 "digest": "sha384", 00:11:21.780 "dhgroup": "ffdhe4096" 00:11:21.780 } 00:11:21.780 } 00:11:21.780 ]' 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.780 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.037 11:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.970 11:35:26 
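
After each attach, the test queries the target for the subsystem's queue pairs and asserts that the negotiated parameters match what was configured, which is exactly what the JSON dump and jq checks above are doing. A sketch of that verification step using the same RPC and jq filters seen in this log (subsystem NQN copied from the entries above; invoking rpc.py against the target's default socket is an assumption, since the rpc_cmd wrapper's socket is not shown in this excerpt):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")

  # the first qpair must have negotiated the expected digest and DH group
  # and reached the 'completed' authentication state
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
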
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:22.970 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.534 00:11:23.534 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.534 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.534 11:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.791 { 00:11:23.791 "cntlid": 79, 00:11:23.791 "qid": 0, 00:11:23.791 "state": "enabled", 00:11:23.791 "thread": "nvmf_tgt_poll_group_000", 00:11:23.791 "listen_address": { 00:11:23.791 "trtype": "TCP", 00:11:23.791 "adrfam": "IPv4", 00:11:23.791 "traddr": "10.0.0.2", 00:11:23.791 "trsvcid": "4420" 00:11:23.791 }, 00:11:23.791 "peer_address": { 00:11:23.791 "trtype": "TCP", 
00:11:23.791 "adrfam": "IPv4", 00:11:23.791 "traddr": "10.0.0.1", 00:11:23.791 "trsvcid": "40786" 00:11:23.791 }, 00:11:23.791 "auth": { 00:11:23.791 "state": "completed", 00:11:23.791 "digest": "sha384", 00:11:23.791 "dhgroup": "ffdhe4096" 00:11:23.791 } 00:11:23.791 } 00:11:23.791 ]' 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.791 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.355 11:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:24.981 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.238 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.803 00:11:25.803 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.803 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.803 11:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.061 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.061 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.061 11:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.061 11:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.061 11:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.061 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.061 { 00:11:26.061 "cntlid": 81, 00:11:26.061 "qid": 0, 00:11:26.061 "state": "enabled", 00:11:26.061 "thread": "nvmf_tgt_poll_group_000", 00:11:26.061 "listen_address": { 00:11:26.061 "trtype": "TCP", 00:11:26.061 "adrfam": "IPv4", 00:11:26.061 "traddr": "10.0.0.2", 00:11:26.061 "trsvcid": "4420" 00:11:26.061 }, 00:11:26.061 "peer_address": { 00:11:26.061 "trtype": "TCP", 00:11:26.061 "adrfam": "IPv4", 00:11:26.061 "traddr": "10.0.0.1", 00:11:26.061 "trsvcid": "44210" 00:11:26.061 }, 00:11:26.061 "auth": { 00:11:26.061 "state": "completed", 00:11:26.061 "digest": "sha384", 00:11:26.062 "dhgroup": "ffdhe6144" 00:11:26.062 } 00:11:26.062 } 00:11:26.062 ]' 00:11:26.062 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.062 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.062 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.062 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:26.062 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.062 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.062 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.062 11:35:29 
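
Alongside the qpair checks, the host-side app is asked for its controllers to confirm that nvme0 actually came up, and the controller is detached again once the checks pass; both RPCs appear verbatim in the entries above. A condensed sketch (socket path and controller name copied from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # verify the authenticated controller exists on the host-side app
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # tear the controller down before the next digest/dhgroup/key combination
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
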
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.319 11:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:11:27.251 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.251 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:27.251 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.251 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.251 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.251 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.251 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:27.251 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.509 11:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.073 00:11:28.073 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.073 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.073 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.333 { 00:11:28.333 "cntlid": 83, 00:11:28.333 "qid": 0, 00:11:28.333 "state": "enabled", 00:11:28.333 "thread": "nvmf_tgt_poll_group_000", 00:11:28.333 "listen_address": { 00:11:28.333 "trtype": "TCP", 00:11:28.333 "adrfam": "IPv4", 00:11:28.333 "traddr": "10.0.0.2", 00:11:28.333 "trsvcid": "4420" 00:11:28.333 }, 00:11:28.333 "peer_address": { 00:11:28.333 "trtype": "TCP", 00:11:28.333 "adrfam": "IPv4", 00:11:28.333 "traddr": "10.0.0.1", 00:11:28.333 "trsvcid": "44236" 00:11:28.333 }, 00:11:28.333 "auth": { 00:11:28.333 "state": "completed", 00:11:28.333 "digest": "sha384", 00:11:28.333 "dhgroup": "ffdhe6144" 00:11:28.333 } 00:11:28.333 } 00:11:28.333 ]' 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.333 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.596 11:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.530 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.788 11:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.788 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.788 11:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.046 00:11:30.046 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.046 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.046 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.305 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.305 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.305 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.305 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.305 11:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.305 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.305 { 00:11:30.305 "cntlid": 85, 
00:11:30.305 "qid": 0, 00:11:30.305 "state": "enabled", 00:11:30.305 "thread": "nvmf_tgt_poll_group_000", 00:11:30.305 "listen_address": { 00:11:30.305 "trtype": "TCP", 00:11:30.305 "adrfam": "IPv4", 00:11:30.305 "traddr": "10.0.0.2", 00:11:30.305 "trsvcid": "4420" 00:11:30.305 }, 00:11:30.305 "peer_address": { 00:11:30.305 "trtype": "TCP", 00:11:30.305 "adrfam": "IPv4", 00:11:30.305 "traddr": "10.0.0.1", 00:11:30.305 "trsvcid": "44250" 00:11:30.305 }, 00:11:30.305 "auth": { 00:11:30.305 "state": "completed", 00:11:30.305 "digest": "sha384", 00:11:30.305 "dhgroup": "ffdhe6144" 00:11:30.305 } 00:11:30.305 } 00:11:30.305 ]' 00:11:30.305 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.563 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.563 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.563 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:30.563 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.563 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.563 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.563 11:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.821 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:11:31.756 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.756 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:31.756 11:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.756 11:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.756 11:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.756 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.756 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:31.756 11:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
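
Each combination is also exercised from the kernel host: the nvme connect call above passes the DH-HMAC-CHAP secrets directly on the command line, --dhchap-secret for the host key and --dhchap-ctrl-secret for the controller key, and the following disconnect confirms one controller was established. A trimmed sketch of that host-side pass, with the secret strings elided since they are test material quoted in the log (transport address, NQNs and host ID copied from the entries above):

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0

  # kernel initiator: authenticate with DH-HMAC-CHAP while connecting
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'

  nvme disconnect -n "$subnqn"
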
00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.014 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.272 00:11:32.272 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.272 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.272 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.594 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.594 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.594 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.594 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.594 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.594 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.594 { 00:11:32.594 "cntlid": 87, 00:11:32.594 "qid": 0, 00:11:32.594 "state": "enabled", 00:11:32.594 "thread": "nvmf_tgt_poll_group_000", 00:11:32.594 "listen_address": { 00:11:32.594 "trtype": "TCP", 00:11:32.594 "adrfam": "IPv4", 00:11:32.594 "traddr": "10.0.0.2", 00:11:32.594 "trsvcid": "4420" 00:11:32.594 }, 00:11:32.594 "peer_address": { 00:11:32.594 "trtype": "TCP", 00:11:32.594 "adrfam": "IPv4", 00:11:32.594 "traddr": "10.0.0.1", 00:11:32.594 "trsvcid": "44278" 00:11:32.594 }, 00:11:32.594 "auth": { 00:11:32.594 "state": "completed", 00:11:32.594 "digest": "sha384", 00:11:32.594 "dhgroup": "ffdhe6144" 00:11:32.594 } 00:11:32.594 } 00:11:32.594 ]' 00:11:32.594 11:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.594 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.594 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.851 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:32.851 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.851 11:35:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.851 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.851 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.108 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:33.674 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.932 11:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.867 00:11:34.867 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.867 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.867 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.867 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.867 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.867 11:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.867 11:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.868 11:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.868 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.868 { 00:11:34.868 "cntlid": 89, 00:11:34.868 "qid": 0, 00:11:34.868 "state": "enabled", 00:11:34.868 "thread": "nvmf_tgt_poll_group_000", 00:11:34.868 "listen_address": { 00:11:34.868 "trtype": "TCP", 00:11:34.868 "adrfam": "IPv4", 00:11:34.868 "traddr": "10.0.0.2", 00:11:34.868 "trsvcid": "4420" 00:11:34.868 }, 00:11:34.868 "peer_address": { 00:11:34.868 "trtype": "TCP", 00:11:34.868 "adrfam": "IPv4", 00:11:34.868 "traddr": "10.0.0.1", 00:11:34.868 "trsvcid": "50658" 00:11:34.868 }, 00:11:34.868 "auth": { 00:11:34.868 "state": "completed", 00:11:34.868 "digest": "sha384", 00:11:34.868 "dhgroup": "ffdhe8192" 00:11:34.868 } 00:11:34.868 } 00:11:34.868 ]' 00:11:34.868 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.125 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.125 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.125 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:35.125 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.125 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.125 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.125 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.383 11:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:11:35.948 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.949 
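
The whole section is driven by the nested loops visible in the trace (for dhgroup in "${dhgroups[@]}" / for keyid in "${!keys[@]}"): every DH group is tried with every key index at the sha384 digest, each pass ending with the disconnect above and the nvmf_subsystem_remove_host that follows. An illustrative, standalone reconstruction of that loop structure, limited to the groups and key indices that appear in this excerpt (the real hostrpc and connect_authenticate helpers live earlier in target/auth.sh, so they are only echoed here):

  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this excerpt
  keyids=(0 1 2 3)

  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${keyids[@]}"; do
      # in target/auth.sh these are real helper calls; echoed so the sketch runs standalone
      echo "hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups $dhgroup"
      echo "connect_authenticate sha384 $dhgroup $keyid"
    done
  done
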
11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:35.949 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.949 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.949 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.949 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.949 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:35.949 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.207 11:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.170 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.170 { 00:11:37.170 "cntlid": 91, 00:11:37.170 "qid": 0, 00:11:37.170 "state": "enabled", 00:11:37.170 "thread": "nvmf_tgt_poll_group_000", 00:11:37.170 "listen_address": { 00:11:37.170 "trtype": "TCP", 00:11:37.170 "adrfam": "IPv4", 00:11:37.170 "traddr": "10.0.0.2", 00:11:37.170 "trsvcid": "4420" 00:11:37.170 }, 00:11:37.170 "peer_address": { 00:11:37.170 "trtype": "TCP", 00:11:37.170 "adrfam": "IPv4", 00:11:37.170 "traddr": "10.0.0.1", 00:11:37.170 "trsvcid": "50680" 00:11:37.170 }, 00:11:37.170 "auth": { 00:11:37.170 "state": "completed", 00:11:37.170 "digest": "sha384", 00:11:37.170 "dhgroup": "ffdhe8192" 00:11:37.170 } 00:11:37.170 } 00:11:37.170 ]' 00:11:37.170 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.430 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.430 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.430 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:37.430 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.430 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.430 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.430 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.689 11:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:11:38.623 11:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.623 11:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:38.623 11:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.623 11:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.623 11:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.623 11:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.623 11:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:38.623 11:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:38.623 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:38.623 11:35:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.623 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.623 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:38.623 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:38.624 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.624 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.624 11:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.624 11:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.624 11:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.624 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.624 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.557 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.557 { 00:11:39.557 "cntlid": 93, 00:11:39.557 "qid": 0, 00:11:39.557 "state": "enabled", 00:11:39.557 "thread": "nvmf_tgt_poll_group_000", 00:11:39.557 "listen_address": { 00:11:39.557 "trtype": "TCP", 00:11:39.557 "adrfam": "IPv4", 00:11:39.557 "traddr": "10.0.0.2", 00:11:39.557 "trsvcid": "4420" 00:11:39.557 }, 00:11:39.557 "peer_address": { 00:11:39.557 "trtype": "TCP", 00:11:39.557 "adrfam": "IPv4", 00:11:39.557 "traddr": "10.0.0.1", 00:11:39.557 "trsvcid": "50700" 00:11:39.557 }, 00:11:39.557 "auth": { 00:11:39.557 "state": "completed", 00:11:39.557 "digest": "sha384", 00:11:39.557 "dhgroup": "ffdhe8192" 00:11:39.557 } 00:11:39.557 } 00:11:39.557 ]' 00:11:39.557 11:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.557 11:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.557 11:35:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.814 11:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:39.814 11:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.814 11:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.814 11:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.814 11:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.072 11:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:11:40.635 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.635 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:40.636 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.636 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.636 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.636 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.636 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:40.636 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.894 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:41.828 00:11:41.828 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.828 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.828 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.828 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.828 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.828 11:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.828 11:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.828 11:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.828 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.828 { 00:11:41.828 "cntlid": 95, 00:11:41.828 "qid": 0, 00:11:41.828 "state": "enabled", 00:11:41.828 "thread": "nvmf_tgt_poll_group_000", 00:11:41.828 "listen_address": { 00:11:41.828 "trtype": "TCP", 00:11:41.828 "adrfam": "IPv4", 00:11:41.828 "traddr": "10.0.0.2", 00:11:41.828 "trsvcid": "4420" 00:11:41.828 }, 00:11:41.828 "peer_address": { 00:11:41.828 "trtype": "TCP", 00:11:41.828 "adrfam": "IPv4", 00:11:41.828 "traddr": "10.0.0.1", 00:11:41.828 "trsvcid": "50708" 00:11:41.828 }, 00:11:41.828 "auth": { 00:11:41.828 "state": "completed", 00:11:41.828 "digest": "sha384", 00:11:41.828 "dhgroup": "ffdhe8192" 00:11:41.828 } 00:11:41.828 } 00:11:41.828 ]' 00:11:41.828 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.087 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.087 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.087 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:42.087 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.087 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.087 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.087 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.345 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:11:42.912 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.170 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.170 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:43.170 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.170 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.170 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.171 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:43.171 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.171 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.171 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:43.171 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.429 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.687 00:11:43.687 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.687 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.687 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.945 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.946 11:35:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.946 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.946 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.946 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.946 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.946 { 00:11:43.946 "cntlid": 97, 00:11:43.946 "qid": 0, 00:11:43.946 "state": "enabled", 00:11:43.946 "thread": "nvmf_tgt_poll_group_000", 00:11:43.946 "listen_address": { 00:11:43.946 "trtype": "TCP", 00:11:43.946 "adrfam": "IPv4", 00:11:43.946 "traddr": "10.0.0.2", 00:11:43.946 "trsvcid": "4420" 00:11:43.946 }, 00:11:43.946 "peer_address": { 00:11:43.946 "trtype": "TCP", 00:11:43.946 "adrfam": "IPv4", 00:11:43.946 "traddr": "10.0.0.1", 00:11:43.946 "trsvcid": "50716" 00:11:43.946 }, 00:11:43.946 "auth": { 00:11:43.946 "state": "completed", 00:11:43.946 "digest": "sha512", 00:11:43.946 "dhgroup": "null" 00:11:43.946 } 00:11:43.946 } 00:11:43.946 ]' 00:11:43.946 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.946 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.946 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.204 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:44.204 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.204 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.204 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.204 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.463 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:11:45.046 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.046 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:45.046 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.046 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.046 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.046 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.046 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:45.046 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.304 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.563 00:11:45.563 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.563 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.563 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.823 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.823 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.823 11:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.823 11:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.823 11:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.823 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.823 { 00:11:45.823 "cntlid": 99, 00:11:45.823 "qid": 0, 00:11:45.823 "state": "enabled", 00:11:45.824 "thread": "nvmf_tgt_poll_group_000", 00:11:45.824 "listen_address": { 00:11:45.824 "trtype": "TCP", 00:11:45.824 "adrfam": "IPv4", 00:11:45.824 "traddr": "10.0.0.2", 00:11:45.824 "trsvcid": "4420" 00:11:45.824 }, 00:11:45.824 "peer_address": { 00:11:45.824 "trtype": "TCP", 00:11:45.824 "adrfam": "IPv4", 00:11:45.824 "traddr": "10.0.0.1", 00:11:45.824 "trsvcid": "45916" 00:11:45.824 }, 00:11:45.824 "auth": { 00:11:45.824 "state": "completed", 00:11:45.824 "digest": "sha512", 00:11:45.824 "dhgroup": "null" 00:11:45.824 } 
00:11:45.824 } 00:11:45.824 ]' 00:11:45.824 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.824 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.824 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:46.082 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:46.082 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:46.082 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.082 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.082 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.342 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:11:46.908 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.908 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:46.908 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.908 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.908 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.908 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.908 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:46.908 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.167 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.734 00:11:47.734 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.734 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.734 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.993 { 00:11:47.993 "cntlid": 101, 00:11:47.993 "qid": 0, 00:11:47.993 "state": "enabled", 00:11:47.993 "thread": "nvmf_tgt_poll_group_000", 00:11:47.993 "listen_address": { 00:11:47.993 "trtype": "TCP", 00:11:47.993 "adrfam": "IPv4", 00:11:47.993 "traddr": "10.0.0.2", 00:11:47.993 "trsvcid": "4420" 00:11:47.993 }, 00:11:47.993 "peer_address": { 00:11:47.993 "trtype": "TCP", 00:11:47.993 "adrfam": "IPv4", 00:11:47.993 "traddr": "10.0.0.1", 00:11:47.993 "trsvcid": "45942" 00:11:47.993 }, 00:11:47.993 "auth": { 00:11:47.993 "state": "completed", 00:11:47.993 "digest": "sha512", 00:11:47.993 "dhgroup": "null" 00:11:47.993 } 00:11:47.993 } 00:11:47.993 ]' 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.993 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.250 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 
7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.182 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.439 00:11:49.699 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.699 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.699 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.960 { 00:11:49.960 "cntlid": 103, 00:11:49.960 "qid": 0, 00:11:49.960 "state": "enabled", 00:11:49.960 "thread": "nvmf_tgt_poll_group_000", 00:11:49.960 "listen_address": { 00:11:49.960 "trtype": "TCP", 00:11:49.960 "adrfam": "IPv4", 00:11:49.960 "traddr": "10.0.0.2", 00:11:49.960 "trsvcid": "4420" 00:11:49.960 }, 00:11:49.960 "peer_address": { 00:11:49.960 "trtype": "TCP", 00:11:49.960 "adrfam": "IPv4", 00:11:49.960 "traddr": "10.0.0.1", 00:11:49.960 "trsvcid": "45972" 00:11:49.960 }, 00:11:49.960 "auth": { 00:11:49.960 "state": "completed", 00:11:49.960 "digest": "sha512", 00:11:49.960 "dhgroup": "null" 00:11:49.960 } 00:11:49.960 } 00:11:49.960 ]' 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.960 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.218 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:51.154 11:35:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.154 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.412 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.412 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.412 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.673 00:11:51.673 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.673 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.673 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.932 { 00:11:51.932 "cntlid": 105, 00:11:51.932 "qid": 0, 00:11:51.932 "state": "enabled", 00:11:51.932 "thread": "nvmf_tgt_poll_group_000", 00:11:51.932 "listen_address": { 00:11:51.932 "trtype": "TCP", 00:11:51.932 "adrfam": "IPv4", 00:11:51.932 "traddr": "10.0.0.2", 00:11:51.932 "trsvcid": "4420" 00:11:51.932 }, 00:11:51.932 "peer_address": { 00:11:51.932 "trtype": "TCP", 00:11:51.932 "adrfam": "IPv4", 00:11:51.932 "traddr": "10.0.0.1", 00:11:51.932 "trsvcid": "46002" 00:11:51.932 }, 00:11:51.932 "auth": { 00:11:51.932 "state": "completed", 
00:11:51.932 "digest": "sha512", 00:11:51.932 "dhgroup": "ffdhe2048" 00:11:51.932 } 00:11:51.932 } 00:11:51.932 ]' 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.932 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.191 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:52.191 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.191 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.191 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.191 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.450 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:11:53.016 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.017 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:53.017 11:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.017 11:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.017 11:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.017 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.017 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:53.017 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.329 11:35:56 
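One detail worth pulling out of these add_host/attach_controller pairs: keys key0 through key2 are always granted together with a controller key (ckey0 through ckey2), so the controller must also authenticate itself back to the host, while key3 is registered with no --dhchap-ctrlr-key and the matching nvme connect lines pass only --dhchap-secret. That is what the ${ckeys[$3]:+...} expansion at auth.sh@37 implements. A hedged sketch of the two shapes, reusing the placeholders from the earlier sketch:

# Bidirectional: host authenticates with key1 and requires the controller to answer with ckey1.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Unidirectional: key3 has no controller key, so only the host is challenged.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3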
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.329 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.603 00:11:53.603 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.603 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.603 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.170 { 00:11:54.170 "cntlid": 107, 00:11:54.170 "qid": 0, 00:11:54.170 "state": "enabled", 00:11:54.170 "thread": "nvmf_tgt_poll_group_000", 00:11:54.170 "listen_address": { 00:11:54.170 "trtype": "TCP", 00:11:54.170 "adrfam": "IPv4", 00:11:54.170 "traddr": "10.0.0.2", 00:11:54.170 "trsvcid": "4420" 00:11:54.170 }, 00:11:54.170 "peer_address": { 00:11:54.170 "trtype": "TCP", 00:11:54.170 "adrfam": "IPv4", 00:11:54.170 "traddr": "10.0.0.1", 00:11:54.170 "trsvcid": "46034" 00:11:54.170 }, 00:11:54.170 "auth": { 00:11:54.170 "state": "completed", 00:11:54.170 "digest": "sha512", 00:11:54.170 "dhgroup": "ffdhe2048" 00:11:54.170 } 00:11:54.170 } 00:11:54.170 ]' 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.170 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.430 11:35:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:11:54.998 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.998 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:54.998 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.998 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.998 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.998 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.998 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:54.998 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.515 00:11:55.773 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.773 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.773 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.032 { 00:11:56.032 "cntlid": 109, 00:11:56.032 "qid": 0, 00:11:56.032 "state": "enabled", 00:11:56.032 "thread": "nvmf_tgt_poll_group_000", 00:11:56.032 "listen_address": { 00:11:56.032 "trtype": "TCP", 00:11:56.032 "adrfam": "IPv4", 00:11:56.032 "traddr": "10.0.0.2", 00:11:56.032 "trsvcid": "4420" 00:11:56.032 }, 00:11:56.032 "peer_address": { 00:11:56.032 "trtype": "TCP", 00:11:56.032 "adrfam": "IPv4", 00:11:56.032 "traddr": "10.0.0.1", 00:11:56.032 "trsvcid": "38592" 00:11:56.032 }, 00:11:56.032 "auth": { 00:11:56.032 "state": "completed", 00:11:56.032 "digest": "sha512", 00:11:56.032 "dhgroup": "ffdhe2048" 00:11:56.032 } 00:11:56.032 } 00:11:56.032 ]' 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.032 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.290 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:11:57.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:57.224 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.224 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.224 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.224 11:36:00 
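After each bdev_nvme pass, the same credentials are exercised once more through the kernel initiator: nvme connect is handed the literal DHHC-1 secrets (rather than SPDK key names), including --dhchap-ctrl-secret whenever a controller key is in play, and the pass ends with a disconnect plus nvmf_subsystem_remove_host so the next combination starts from a clean grant list. A condensed sketch of that leg, with the secrets shortened here (the log shows the full DHHC-1:xx: strings inline):

# Kernel-initiator leg of the pass just shown (key2/ckey2, sha512/ffdhe2048).
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 \
    --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
nvme disconnect -n "$SUBNQN"                              # expect: "... disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # revoke the grant before the next pass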
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:57.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:57.481 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:11:57.481 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.481 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:57.481 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:57.482 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:57.482 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.482 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:11:57.482 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.482 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.482 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.482 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.482 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.741 00:11:57.741 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.741 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.741 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.000 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.000 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.000 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.000 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.000 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.000 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.000 { 00:11:58.000 "cntlid": 111, 00:11:58.000 "qid": 0, 00:11:58.000 "state": "enabled", 00:11:58.000 "thread": "nvmf_tgt_poll_group_000", 00:11:58.000 "listen_address": { 00:11:58.000 "trtype": "TCP", 00:11:58.000 "adrfam": "IPv4", 00:11:58.000 "traddr": "10.0.0.2", 00:11:58.000 "trsvcid": "4420" 00:11:58.000 }, 00:11:58.000 "peer_address": { 00:11:58.001 "trtype": "TCP", 
00:11:58.001 "adrfam": "IPv4", 00:11:58.001 "traddr": "10.0.0.1", 00:11:58.001 "trsvcid": "38618" 00:11:58.001 }, 00:11:58.001 "auth": { 00:11:58.001 "state": "completed", 00:11:58.001 "digest": "sha512", 00:11:58.001 "dhgroup": "ffdhe2048" 00:11:58.001 } 00:11:58.001 } 00:11:58.001 ]' 00:11:58.001 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.001 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.001 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.259 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:58.259 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.259 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.259 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.259 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.517 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:59.084 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.343 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.601 00:11:59.601 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.601 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.601 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.882 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.882 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.882 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.882 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.882 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.882 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.882 { 00:11:59.882 "cntlid": 113, 00:11:59.882 "qid": 0, 00:11:59.882 "state": "enabled", 00:11:59.882 "thread": "nvmf_tgt_poll_group_000", 00:11:59.882 "listen_address": { 00:11:59.882 "trtype": "TCP", 00:11:59.882 "adrfam": "IPv4", 00:11:59.882 "traddr": "10.0.0.2", 00:11:59.882 "trsvcid": "4420" 00:11:59.882 }, 00:11:59.882 "peer_address": { 00:11:59.882 "trtype": "TCP", 00:11:59.882 "adrfam": "IPv4", 00:11:59.882 "traddr": "10.0.0.1", 00:11:59.882 "trsvcid": "38644" 00:11:59.882 }, 00:11:59.882 "auth": { 00:11:59.882 "state": "completed", 00:11:59.882 "digest": "sha512", 00:11:59.883 "dhgroup": "ffdhe3072" 00:11:59.883 } 00:11:59.883 } 00:11:59.883 ]' 00:11:59.883 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.883 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.883 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.139 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.139 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.139 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.139 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.139 11:36:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.397 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:12:00.962 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.962 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:00.962 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.962 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.962 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.962 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.962 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:00.962 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.220 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.787 00:12:01.787 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.787 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.787 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.787 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.787 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.787 11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.787 11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.046 { 00:12:02.046 "cntlid": 115, 00:12:02.046 "qid": 0, 00:12:02.046 "state": "enabled", 00:12:02.046 "thread": "nvmf_tgt_poll_group_000", 00:12:02.046 "listen_address": { 00:12:02.046 "trtype": "TCP", 00:12:02.046 "adrfam": "IPv4", 00:12:02.046 "traddr": "10.0.0.2", 00:12:02.046 "trsvcid": "4420" 00:12:02.046 }, 00:12:02.046 "peer_address": { 00:12:02.046 "trtype": "TCP", 00:12:02.046 "adrfam": "IPv4", 00:12:02.046 "traddr": "10.0.0.1", 00:12:02.046 "trsvcid": "38670" 00:12:02.046 }, 00:12:02.046 "auth": { 00:12:02.046 "state": "completed", 00:12:02.046 "digest": "sha512", 00:12:02.046 "dhgroup": "ffdhe3072" 00:12:02.046 } 00:12:02.046 } 00:12:02.046 ]' 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.046 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.304 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.238 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.805 00:12:03.805 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.805 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.805 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.064 { 00:12:04.064 "cntlid": 117, 
00:12:04.064 "qid": 0, 00:12:04.064 "state": "enabled", 00:12:04.064 "thread": "nvmf_tgt_poll_group_000", 00:12:04.064 "listen_address": { 00:12:04.064 "trtype": "TCP", 00:12:04.064 "adrfam": "IPv4", 00:12:04.064 "traddr": "10.0.0.2", 00:12:04.064 "trsvcid": "4420" 00:12:04.064 }, 00:12:04.064 "peer_address": { 00:12:04.064 "trtype": "TCP", 00:12:04.064 "adrfam": "IPv4", 00:12:04.064 "traddr": "10.0.0.1", 00:12:04.064 "trsvcid": "38694" 00:12:04.064 }, 00:12:04.064 "auth": { 00:12:04.064 "state": "completed", 00:12:04.064 "digest": "sha512", 00:12:04.064 "dhgroup": "ffdhe3072" 00:12:04.064 } 00:12:04.064 } 00:12:04.064 ]' 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.064 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.323 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:12:05.259 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.259 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:05.259 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.259 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.259 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.259 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.259 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:05.259 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.516 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.774 00:12:05.774 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.774 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.774 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.031 { 00:12:06.031 "cntlid": 119, 00:12:06.031 "qid": 0, 00:12:06.031 "state": "enabled", 00:12:06.031 "thread": "nvmf_tgt_poll_group_000", 00:12:06.031 "listen_address": { 00:12:06.031 "trtype": "TCP", 00:12:06.031 "adrfam": "IPv4", 00:12:06.031 "traddr": "10.0.0.2", 00:12:06.031 "trsvcid": "4420" 00:12:06.031 }, 00:12:06.031 "peer_address": { 00:12:06.031 "trtype": "TCP", 00:12:06.031 "adrfam": "IPv4", 00:12:06.031 "traddr": "10.0.0.1", 00:12:06.031 "trsvcid": "55830" 00:12:06.031 }, 00:12:06.031 "auth": { 00:12:06.031 "state": "completed", 00:12:06.031 "digest": "sha512", 00:12:06.031 "dhgroup": "ffdhe3072" 00:12:06.031 } 00:12:06.031 } 00:12:06.031 ]' 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.031 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.287 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.287 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.287 11:36:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.287 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.287 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.545 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:12:07.110 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.110 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:07.110 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.110 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.368 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.368 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.368 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.368 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.368 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.627 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.886 00:12:07.886 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.886 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.886 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.144 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.144 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.145 { 00:12:08.145 "cntlid": 121, 00:12:08.145 "qid": 0, 00:12:08.145 "state": "enabled", 00:12:08.145 "thread": "nvmf_tgt_poll_group_000", 00:12:08.145 "listen_address": { 00:12:08.145 "trtype": "TCP", 00:12:08.145 "adrfam": "IPv4", 00:12:08.145 "traddr": "10.0.0.2", 00:12:08.145 "trsvcid": "4420" 00:12:08.145 }, 00:12:08.145 "peer_address": { 00:12:08.145 "trtype": "TCP", 00:12:08.145 "adrfam": "IPv4", 00:12:08.145 "traddr": "10.0.0.1", 00:12:08.145 "trsvcid": "55862" 00:12:08.145 }, 00:12:08.145 "auth": { 00:12:08.145 "state": "completed", 00:12:08.145 "digest": "sha512", 00:12:08.145 "dhgroup": "ffdhe4096" 00:12:08.145 } 00:12:08.145 } 00:12:08.145 ]' 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.145 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.403 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.403 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.403 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.661 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:12:09.228 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.229 
11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:09.229 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.229 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.229 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.229 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.229 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:09.229 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.487 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.757 00:12:09.757 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.757 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.757 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.014 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.014 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.014 11:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.014 11:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:12:10.014 11:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.014 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.014 { 00:12:10.014 "cntlid": 123, 00:12:10.014 "qid": 0, 00:12:10.014 "state": "enabled", 00:12:10.014 "thread": "nvmf_tgt_poll_group_000", 00:12:10.014 "listen_address": { 00:12:10.014 "trtype": "TCP", 00:12:10.014 "adrfam": "IPv4", 00:12:10.014 "traddr": "10.0.0.2", 00:12:10.014 "trsvcid": "4420" 00:12:10.014 }, 00:12:10.014 "peer_address": { 00:12:10.014 "trtype": "TCP", 00:12:10.014 "adrfam": "IPv4", 00:12:10.014 "traddr": "10.0.0.1", 00:12:10.014 "trsvcid": "55890" 00:12:10.014 }, 00:12:10.014 "auth": { 00:12:10.014 "state": "completed", 00:12:10.014 "digest": "sha512", 00:12:10.014 "dhgroup": "ffdhe4096" 00:12:10.014 } 00:12:10.014 } 00:12:10.014 ]' 00:12:10.014 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.271 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.271 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.271 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:10.271 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.271 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.271 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.271 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.530 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:12:11.205 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.205 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:11.205 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.205 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.205 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.205 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.205 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:11.205 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.769 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.027 00:12:12.027 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.027 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.027 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.284 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.284 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.284 11:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.284 11:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.285 11:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.285 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.285 { 00:12:12.285 "cntlid": 125, 00:12:12.285 "qid": 0, 00:12:12.285 "state": "enabled", 00:12:12.285 "thread": "nvmf_tgt_poll_group_000", 00:12:12.285 "listen_address": { 00:12:12.285 "trtype": "TCP", 00:12:12.285 "adrfam": "IPv4", 00:12:12.285 "traddr": "10.0.0.2", 00:12:12.285 "trsvcid": "4420" 00:12:12.285 }, 00:12:12.285 "peer_address": { 00:12:12.285 "trtype": "TCP", 00:12:12.285 "adrfam": "IPv4", 00:12:12.285 "traddr": "10.0.0.1", 00:12:12.285 "trsvcid": "55906" 00:12:12.285 }, 00:12:12.285 "auth": { 00:12:12.285 "state": "completed", 00:12:12.285 "digest": "sha512", 00:12:12.285 "dhgroup": "ffdhe4096" 00:12:12.285 } 00:12:12.285 } 00:12:12.285 ]' 00:12:12.285 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.285 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.285 11:36:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.543 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.543 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.543 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.543 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.543 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.801 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:12:13.366 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.624 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:13.624 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.624 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.624 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.624 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.624 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.624 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:13.882 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.141 00:12:14.141 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.141 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.141 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.400 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.400 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.400 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.400 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.400 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.400 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.400 { 00:12:14.400 "cntlid": 127, 00:12:14.400 "qid": 0, 00:12:14.400 "state": "enabled", 00:12:14.400 "thread": "nvmf_tgt_poll_group_000", 00:12:14.400 "listen_address": { 00:12:14.400 "trtype": "TCP", 00:12:14.400 "adrfam": "IPv4", 00:12:14.400 "traddr": "10.0.0.2", 00:12:14.400 "trsvcid": "4420" 00:12:14.400 }, 00:12:14.400 "peer_address": { 00:12:14.400 "trtype": "TCP", 00:12:14.400 "adrfam": "IPv4", 00:12:14.400 "traddr": "10.0.0.1", 00:12:14.400 "trsvcid": "59514" 00:12:14.400 }, 00:12:14.400 "auth": { 00:12:14.400 "state": "completed", 00:12:14.400 "digest": "sha512", 00:12:14.400 "dhgroup": "ffdhe4096" 00:12:14.400 } 00:12:14.400 } 00:12:14.400 ]' 00:12:14.400 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.400 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.659 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.659 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.659 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.659 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.659 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.659 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.917 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.483 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.483 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.741 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.306 00:12:16.306 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.306 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.306 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.564 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.564 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
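The nvme connect / nvme disconnect lines in the trace (target/auth.sh@52 and @55) are the kernel-initiator half of each pass: the same subsystem is reconnected through nvme-cli using the DH-HMAC-CHAP secrets directly. A minimal sketch of that check follows; the DHHC-1 values are placeholders for the base64-wrapped keys the test generated (the real strings appear in full in the trace), --dhchap-ctrl-secret is only present on passes that use a controller key, and rpc_cmd is the test's wrapper around rpc.py aimed at the target.

  # kernel NVMe/TCP initiator, authenticating with the host secret
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 \
      --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 \
      --dhchap-secret 'DHHC-1:00:<host key>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'

  # drop the session and the host entry before the next key/dhgroup combination
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0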
00:12:16.564 11:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.564 11:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.564 11:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.564 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.564 { 00:12:16.564 "cntlid": 129, 00:12:16.564 "qid": 0, 00:12:16.564 "state": "enabled", 00:12:16.564 "thread": "nvmf_tgt_poll_group_000", 00:12:16.564 "listen_address": { 00:12:16.564 "trtype": "TCP", 00:12:16.564 "adrfam": "IPv4", 00:12:16.564 "traddr": "10.0.0.2", 00:12:16.564 "trsvcid": "4420" 00:12:16.564 }, 00:12:16.564 "peer_address": { 00:12:16.564 "trtype": "TCP", 00:12:16.564 "adrfam": "IPv4", 00:12:16.564 "traddr": "10.0.0.1", 00:12:16.564 "trsvcid": "59536" 00:12:16.564 }, 00:12:16.564 "auth": { 00:12:16.564 "state": "completed", 00:12:16.564 "digest": "sha512", 00:12:16.564 "dhgroup": "ffdhe6144" 00:12:16.564 } 00:12:16.564 } 00:12:16.564 ]' 00:12:16.564 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.822 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.822 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.822 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.822 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.822 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.822 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.822 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.080 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:12:17.646 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.646 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:17.646 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.646 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.646 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.646 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.646 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:17.646 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.211 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.469 00:12:18.469 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.469 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.469 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.728 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.728 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.728 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.728 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.728 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.728 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.728 { 00:12:18.728 "cntlid": 131, 00:12:18.728 "qid": 0, 00:12:18.728 "state": "enabled", 00:12:18.728 "thread": "nvmf_tgt_poll_group_000", 00:12:18.728 "listen_address": { 00:12:18.728 "trtype": "TCP", 00:12:18.728 "adrfam": "IPv4", 00:12:18.728 "traddr": "10.0.0.2", 00:12:18.728 "trsvcid": "4420" 00:12:18.728 }, 00:12:18.728 "peer_address": { 00:12:18.728 "trtype": "TCP", 00:12:18.728 "adrfam": "IPv4", 00:12:18.728 "traddr": "10.0.0.1", 00:12:18.728 "trsvcid": "59560" 00:12:18.728 }, 00:12:18.728 "auth": { 00:12:18.728 "state": "completed", 00:12:18.728 "digest": "sha512", 00:12:18.728 "dhgroup": "ffdhe6144" 00:12:18.728 } 00:12:18.728 } 00:12:18.728 ]' 00:12:18.728 
11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.986 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.986 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.986 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:18.986 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.986 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.986 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.986 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.243 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
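Between rounds the log also exercises the kernel host: after the SPDK-side controller is detached, nvme connect is issued with the raw DHHC-1 secrets and the connection is torn down again with nvme disconnect. A sketch of that leg with the secrets replaced by placeholders; the real values are the DHHC-1 strings printed in the trace above.

#!/usr/bin/env bash
# Kernel-initiator leg of a round, as performed in the log above (placeholder secrets).
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0
HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0
HOST_KEY='DHHC-1:00:<host key material>'        # placeholder, not a real secret
CTRL_KEY='DHHC-1:03:<controller key material>'  # placeholder, not a real secret

nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
nvme disconnect -n "$SUBNQN"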
00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.175 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.740 00:12:20.740 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.740 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.740 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.998 { 00:12:20.998 "cntlid": 133, 00:12:20.998 "qid": 0, 00:12:20.998 "state": "enabled", 00:12:20.998 "thread": "nvmf_tgt_poll_group_000", 00:12:20.998 "listen_address": { 00:12:20.998 "trtype": "TCP", 00:12:20.998 "adrfam": "IPv4", 00:12:20.998 "traddr": "10.0.0.2", 00:12:20.998 "trsvcid": "4420" 00:12:20.998 }, 00:12:20.998 "peer_address": { 00:12:20.998 "trtype": "TCP", 00:12:20.998 "adrfam": "IPv4", 00:12:20.998 "traddr": "10.0.0.1", 00:12:20.998 "trsvcid": "59574" 00:12:20.998 }, 00:12:20.998 "auth": { 00:12:20.998 "state": "completed", 00:12:20.998 "digest": "sha512", 00:12:20.998 "dhgroup": "ffdhe6144" 00:12:20.998 } 00:12:20.998 } 00:12:20.998 ]' 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.998 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.256 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.256 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.256 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.256 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.256 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.513 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 
7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:12:22.449 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.450 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.017 00:12:23.017 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.017 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.017 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.277 { 00:12:23.277 "cntlid": 135, 00:12:23.277 "qid": 0, 00:12:23.277 "state": "enabled", 00:12:23.277 "thread": "nvmf_tgt_poll_group_000", 00:12:23.277 "listen_address": { 00:12:23.277 "trtype": "TCP", 00:12:23.277 "adrfam": "IPv4", 00:12:23.277 "traddr": "10.0.0.2", 00:12:23.277 "trsvcid": "4420" 00:12:23.277 }, 00:12:23.277 "peer_address": { 00:12:23.277 "trtype": "TCP", 00:12:23.277 "adrfam": "IPv4", 00:12:23.277 "traddr": "10.0.0.1", 00:12:23.277 "trsvcid": "59602" 00:12:23.277 }, 00:12:23.277 "auth": { 00:12:23.277 "state": "completed", 00:12:23.277 "digest": "sha512", 00:12:23.277 "dhgroup": "ffdhe6144" 00:12:23.277 } 00:12:23.277 } 00:12:23.277 ]' 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.277 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.536 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:23.536 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.537 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.537 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.537 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.796 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:12:24.365 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.365 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:24.365 11:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.365 11:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.624 11:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.624 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.624 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.624 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:12:24.624 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:24.883 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:24.883 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.883 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.884 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.452 00:12:25.452 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.452 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.452 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.710 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.711 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.711 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.711 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.711 { 00:12:25.711 "cntlid": 137, 00:12:25.711 "qid": 0, 00:12:25.711 "state": "enabled", 00:12:25.711 "thread": "nvmf_tgt_poll_group_000", 00:12:25.711 "listen_address": { 00:12:25.711 "trtype": "TCP", 00:12:25.711 "adrfam": "IPv4", 00:12:25.711 "traddr": "10.0.0.2", 00:12:25.711 "trsvcid": "4420" 00:12:25.711 }, 00:12:25.711 "peer_address": { 00:12:25.711 "trtype": "TCP", 00:12:25.711 "adrfam": "IPv4", 00:12:25.711 "traddr": "10.0.0.1", 00:12:25.711 "trsvcid": "35986" 00:12:25.711 }, 00:12:25.711 "auth": { 00:12:25.711 
"state": "completed", 00:12:25.711 "digest": "sha512", 00:12:25.711 "dhgroup": "ffdhe8192" 00:12:25.711 } 00:12:25.711 } 00:12:25.711 ]' 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.711 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.969 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:12:26.904 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.904 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:26.904 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.904 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.904 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.904 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.904 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:26.904 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.163 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.730 00:12:27.730 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.730 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.730 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.988 { 00:12:27.988 "cntlid": 139, 00:12:27.988 "qid": 0, 00:12:27.988 "state": "enabled", 00:12:27.988 "thread": "nvmf_tgt_poll_group_000", 00:12:27.988 "listen_address": { 00:12:27.988 "trtype": "TCP", 00:12:27.988 "adrfam": "IPv4", 00:12:27.988 "traddr": "10.0.0.2", 00:12:27.988 "trsvcid": "4420" 00:12:27.988 }, 00:12:27.988 "peer_address": { 00:12:27.988 "trtype": "TCP", 00:12:27.988 "adrfam": "IPv4", 00:12:27.988 "traddr": "10.0.0.1", 00:12:27.988 "trsvcid": "36016" 00:12:27.988 }, 00:12:27.988 "auth": { 00:12:27.988 "state": "completed", 00:12:27.988 "digest": "sha512", 00:12:27.988 "dhgroup": "ffdhe8192" 00:12:27.988 } 00:12:27.988 } 00:12:27.988 ]' 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:27.988 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.246 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.246 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.246 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.504 11:36:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:01:MzhhOWZmYjU5ZjZiMjJkNGVhY2ZiMmE3MzlkMDE3ZDFRnesF: --dhchap-ctrl-secret DHHC-1:02:NTk5NjNiZDliZjQ1OTg1MDJjYjU2ZTRjMThkMDUxYTAxODdlYTcyYTBiNDg3NTk1HyejSg==: 00:12:29.070 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.070 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:29.070 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.070 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.070 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.070 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.070 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:29.070 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.328 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.894 00:12:30.151 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.151 11:36:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.151 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.409 { 00:12:30.409 "cntlid": 141, 00:12:30.409 "qid": 0, 00:12:30.409 "state": "enabled", 00:12:30.409 "thread": "nvmf_tgt_poll_group_000", 00:12:30.409 "listen_address": { 00:12:30.409 "trtype": "TCP", 00:12:30.409 "adrfam": "IPv4", 00:12:30.409 "traddr": "10.0.0.2", 00:12:30.409 "trsvcid": "4420" 00:12:30.409 }, 00:12:30.409 "peer_address": { 00:12:30.409 "trtype": "TCP", 00:12:30.409 "adrfam": "IPv4", 00:12:30.409 "traddr": "10.0.0.1", 00:12:30.409 "trsvcid": "36044" 00:12:30.409 }, 00:12:30.409 "auth": { 00:12:30.409 "state": "completed", 00:12:30.409 "digest": "sha512", 00:12:30.409 "dhgroup": "ffdhe8192" 00:12:30.409 } 00:12:30.409 } 00:12:30.409 ]' 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.409 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.975 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:02:NDlkMTU1NmYzNWNkMjcyODg5ZTE0NDY0ZjljMGVmYWMyMjZjZWYyMGNmNDQyODZl9N9RXw==: --dhchap-ctrl-secret DHHC-1:01:M2VlZmRjMjU3MTUwZGZiZTQxMGVkNDdiMzA0ZWQ1MmJEQ6DD: 00:12:31.546 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.546 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:31.546 11:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.546 11:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.546 11:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
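The target/auth.sh@92, @93 and @96 markers that keep reappearing above come from two nested loops: the outer one walks the configured dhgroups and the inner one walks the key indices, calling connect_authenticate for each pair. The script itself is not part of this log, so the following is only a schematic reconstruction of that structure from the xtrace lines; array names and contents are assumptions.

#!/usr/bin/env bash
# Schematic reconstruction of the loop driving the rounds above; details are
# inferred from the xtrace markers target/auth.sh@92-96 and may differ from the real script.
connect_authenticate() {               # stub standing in for the traced function
    local digest=$1 dhgroup=$2 keyid=$3
    echo "round: digest=$digest dhgroup=$dhgroup key=key$keyid"
}

dhgroups=(ffdhe6144 ffdhe8192)         # the groups exercised in this part of the log
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do    # target/auth.sh@92
    for keyid in "${!keys[@]}"; do     # target/auth.sh@93
        # @94: hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"   # @96
    done
done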
00:12:31.546 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.546 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:31.546 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:31.815 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:32.381 00:12:32.381 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.381 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.381 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.639 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.639 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.639 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.639 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.639 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.639 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.639 { 00:12:32.639 "cntlid": 143, 00:12:32.639 "qid": 0, 00:12:32.639 "state": "enabled", 00:12:32.639 "thread": "nvmf_tgt_poll_group_000", 00:12:32.639 "listen_address": { 00:12:32.639 "trtype": "TCP", 00:12:32.639 "adrfam": "IPv4", 00:12:32.639 "traddr": "10.0.0.2", 00:12:32.639 "trsvcid": "4420" 00:12:32.639 }, 00:12:32.639 "peer_address": { 
00:12:32.639 "trtype": "TCP", 00:12:32.639 "adrfam": "IPv4", 00:12:32.639 "traddr": "10.0.0.1", 00:12:32.639 "trsvcid": "36060" 00:12:32.639 }, 00:12:32.639 "auth": { 00:12:32.639 "state": "completed", 00:12:32.639 "digest": "sha512", 00:12:32.639 "dhgroup": "ffdhe8192" 00:12:32.639 } 00:12:32.639 } 00:12:32.639 ]' 00:12:32.639 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.639 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.639 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.896 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.896 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.896 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.896 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.897 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.154 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:33.720 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:33.978 11:36:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.978 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.544 00:12:34.544 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.544 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.544 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.802 { 00:12:34.802 "cntlid": 145, 00:12:34.802 "qid": 0, 00:12:34.802 "state": "enabled", 00:12:34.802 "thread": "nvmf_tgt_poll_group_000", 00:12:34.802 "listen_address": { 00:12:34.802 "trtype": "TCP", 00:12:34.802 "adrfam": "IPv4", 00:12:34.802 "traddr": "10.0.0.2", 00:12:34.802 "trsvcid": "4420" 00:12:34.802 }, 00:12:34.802 "peer_address": { 00:12:34.802 "trtype": "TCP", 00:12:34.802 "adrfam": "IPv4", 00:12:34.802 "traddr": "10.0.0.1", 00:12:34.802 "trsvcid": "33104" 00:12:34.802 }, 00:12:34.802 "auth": { 00:12:34.802 "state": "completed", 00:12:34.802 "digest": "sha512", 00:12:34.802 "dhgroup": "ffdhe8192" 00:12:34.802 } 00:12:34.802 } 00:12:34.802 ]' 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
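One detail worth noting: the repeated ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line uses bash's :+ expansion, so the --dhchap-ctrlr-key argument is only produced when a controller key exists for that index. For key3 the entry is empty, which is why the key3 rounds above add the host and attach with --dhchap-key key3 alone, and the matching nvme connect carries no --dhchap-ctrl-secret. A tiny self-contained illustration of that expansion:

#!/usr/bin/env bash
# Illustration of the ${var:+...} idiom the test uses for the optional controller key.
ckeys=("ckey-material" "")                  # index 0 has a controller key, index 1 does not

for i in "${!ckeys[@]}"; do
    extra=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "key$i -> rpc args: --dhchap-key key$i ${extra[*]}"
done
# Prints:
#   key0 -> rpc args: --dhchap-key key0 --dhchap-ctrlr-key ckey0
#   key1 -> rpc args: --dhchap-key key1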
00:12:34.802 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.081 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.081 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.081 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.339 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:00:NWI2OWFlOWM3ZDc1YzZiYjljZjFiOGVmZDAxMjE5NmQzOTBhMzliM2M3NDgwODQ5X5O9GQ==: --dhchap-ctrl-secret DHHC-1:03:ZDk5MDVjNzc4Y2JjNTk0MmU2Y2E3YzVlODc4MmQxYjljYmRmOTFlMDdjOGY2ZDk5YTY1YzA0ZjhkYjQ3MDA3Mf0Oa6s=: 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:35.951 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:36.519 request: 00:12:36.519 { 00:12:36.519 "name": "nvme0", 00:12:36.519 "trtype": "tcp", 00:12:36.519 "traddr": "10.0.0.2", 00:12:36.519 "adrfam": "ipv4", 00:12:36.519 "trsvcid": "4420", 00:12:36.519 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:36.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0", 00:12:36.519 "prchk_reftag": false, 00:12:36.519 "prchk_guard": false, 00:12:36.519 "hdgst": false, 00:12:36.519 "ddgst": false, 00:12:36.519 "dhchap_key": "key2", 00:12:36.519 "method": "bdev_nvme_attach_controller", 00:12:36.519 "req_id": 1 00:12:36.519 } 00:12:36.519 Got JSON-RPC error response 00:12:36.519 response: 00:12:36.519 { 00:12:36.519 "code": -5, 00:12:36.519 "message": "Input/output error" 00:12:36.519 } 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.519 11:36:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:36.519 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:37.101 request: 00:12:37.101 { 00:12:37.101 "name": "nvme0", 00:12:37.101 "trtype": "tcp", 00:12:37.101 "traddr": "10.0.0.2", 00:12:37.101 "adrfam": "ipv4", 00:12:37.101 "trsvcid": "4420", 00:12:37.101 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:37.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0", 00:12:37.101 "prchk_reftag": false, 00:12:37.101 "prchk_guard": false, 00:12:37.101 "hdgst": false, 00:12:37.101 "ddgst": false, 00:12:37.101 "dhchap_key": "key1", 00:12:37.101 "dhchap_ctrlr_key": "ckey2", 00:12:37.101 "method": "bdev_nvme_attach_controller", 00:12:37.101 "req_id": 1 00:12:37.101 } 00:12:37.101 Got JSON-RPC error response 00:12:37.101 response: 00:12:37.101 { 00:12:37.101 "code": -5, 00:12:37.101 "message": "Input/output error" 00:12:37.101 } 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key1 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 
-- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.101 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.669 request: 00:12:37.669 { 00:12:37.669 "name": "nvme0", 00:12:37.669 "trtype": "tcp", 00:12:37.669 "traddr": "10.0.0.2", 00:12:37.669 "adrfam": "ipv4", 00:12:37.669 "trsvcid": "4420", 00:12:37.669 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:37.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0", 00:12:37.669 "prchk_reftag": false, 00:12:37.669 "prchk_guard": false, 00:12:37.669 "hdgst": false, 00:12:37.669 "ddgst": false, 00:12:37.669 "dhchap_key": "key1", 00:12:37.669 "dhchap_ctrlr_key": "ckey1", 00:12:37.669 "method": "bdev_nvme_attach_controller", 00:12:37.669 "req_id": 1 00:12:37.669 } 00:12:37.669 Got JSON-RPC error response 00:12:37.669 response: 00:12:37.669 { 00:12:37.669 "code": -5, 00:12:37.669 "message": "Input/output error" 00:12:37.669 } 00:12:37.669 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:37.669 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:37.669 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:37.669 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:37.669 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:37.669 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.669 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 69292 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69292 ']' 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69292 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.669 
11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69292 00:12:37.669 killing process with pid 69292 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69292' 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69292 00:12:37.669 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69292 00:12:37.927 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:37.927 11:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.927 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.927 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.927 11:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72335 00:12:37.928 11:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72335 00:12:37.928 11:36:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:37.928 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72335 ']' 00:12:37.928 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.928 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.928 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.928 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.928 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.863 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.863 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:38.863 11:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.863 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.863 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
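Aside (not captured output): the auth target above was relaunched with --wait-for-rpc, so the application pauses before subsystem initialization until it is resumed over the RPC socket. A minimal sketch of that bring-up, assuming the default /var/tmp/spdk.sock socket and the repo path used in this run; the actual test wrapper (rpc_cmd/waitforlisten) may batch these differently:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # poll until the RPC server answers on the socket
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # pre-init configuration RPCs would go here, then resume and wait for subsystem init
    "$rpc" -s /var/tmp/spdk.sock framework_start_init
    "$rpc" -s /var/tmp/spdk.sock framework_wait_init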
00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72335 00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72335 ']' 00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.121 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.380 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.947 00:12:39.947 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.947 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.947 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.206 { 00:12:40.206 "cntlid": 1, 00:12:40.206 "qid": 0, 00:12:40.206 "state": "enabled", 00:12:40.206 "thread": "nvmf_tgt_poll_group_000", 00:12:40.206 "listen_address": { 00:12:40.206 "trtype": "TCP", 00:12:40.206 "adrfam": "IPv4", 00:12:40.206 "traddr": "10.0.0.2", 00:12:40.206 "trsvcid": "4420" 00:12:40.206 }, 00:12:40.206 "peer_address": { 00:12:40.206 "trtype": "TCP", 00:12:40.206 "adrfam": "IPv4", 00:12:40.206 "traddr": "10.0.0.1", 00:12:40.206 "trsvcid": "33146" 00:12:40.206 }, 00:12:40.206 "auth": { 00:12:40.206 "state": "completed", 00:12:40.206 "digest": "sha512", 00:12:40.206 "dhgroup": "ffdhe8192" 00:12:40.206 } 00:12:40.206 } 00:12:40.206 ]' 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.206 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.464 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.464 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.464 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.464 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.464 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.722 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid 7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-secret DHHC-1:03:NzAwN2JlNjAyOGM3ZWExYTU3NTNiYzMwNzg5MGQyMDA3MWRjYTdlOWFmOWQwNTk2MmQxZThlM2FhZWEyMTgyM5+jejI=: 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --dhchap-key key3 00:12:41.287 11:36:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:41.287 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.543 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.801 request: 00:12:41.801 { 00:12:41.801 "name": "nvme0", 00:12:41.801 "trtype": "tcp", 00:12:41.801 "traddr": "10.0.0.2", 00:12:41.801 "adrfam": "ipv4", 00:12:41.801 "trsvcid": "4420", 00:12:41.801 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:41.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0", 00:12:41.801 "prchk_reftag": false, 00:12:41.801 "prchk_guard": false, 00:12:41.801 "hdgst": false, 00:12:41.801 "ddgst": false, 00:12:41.801 "dhchap_key": "key3", 00:12:41.801 "method": "bdev_nvme_attach_controller", 00:12:41.801 "req_id": 1 00:12:41.801 } 00:12:41.801 Got JSON-RPC error response 00:12:41.801 response: 00:12:41.801 { 00:12:41.801 "code": -5, 00:12:41.801 "message": "Input/output error" 00:12:41.801 } 00:12:41.801 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:42.059 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.059 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.059 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.059 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
00:12:42.059 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:42.059 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:42.059 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:42.317 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.317 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:42.317 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.317 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:42.318 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.318 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:42.318 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.318 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.318 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.318 request: 00:12:42.318 { 00:12:42.318 "name": "nvme0", 00:12:42.318 "trtype": "tcp", 00:12:42.318 "traddr": "10.0.0.2", 00:12:42.318 "adrfam": "ipv4", 00:12:42.318 "trsvcid": "4420", 00:12:42.318 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:42.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0", 00:12:42.318 "prchk_reftag": false, 00:12:42.318 "prchk_guard": false, 00:12:42.318 "hdgst": false, 00:12:42.318 "ddgst": false, 00:12:42.318 "dhchap_key": "key3", 00:12:42.318 "method": "bdev_nvme_attach_controller", 00:12:42.318 "req_id": 1 00:12:42.318 } 00:12:42.318 Got JSON-RPC error response 00:12:42.318 response: 00:12:42.318 { 00:12:42.318 "code": -5, 00:12:42.318 "message": "Input/output error" 00:12:42.318 } 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.576 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:42.576 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:12:43.142 request: 00:12:43.142 { 00:12:43.142 "name": "nvme0", 00:12:43.142 "trtype": "tcp", 00:12:43.142 "traddr": "10.0.0.2", 00:12:43.142 "adrfam": "ipv4", 00:12:43.142 "trsvcid": "4420", 00:12:43.142 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:43.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0", 00:12:43.142 "prchk_reftag": false, 00:12:43.142 "prchk_guard": false, 00:12:43.142 "hdgst": false, 00:12:43.142 "ddgst": false, 00:12:43.142 "dhchap_key": "key0", 00:12:43.142 "dhchap_ctrlr_key": "key1", 00:12:43.142 "method": "bdev_nvme_attach_controller", 00:12:43.142 "req_id": 1 00:12:43.142 } 00:12:43.142 Got JSON-RPC error response 00:12:43.142 response: 00:12:43.142 { 00:12:43.142 "code": -5, 00:12:43.142 "message": "Input/output error" 00:12:43.142 } 00:12:43.142 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:43.142 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:43.142 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:43.142 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:43.142 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:43.142 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:43.401 00:12:43.401 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:43.401 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.401 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:43.658 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.658 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.658 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69326 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69326 ']' 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69326 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69326 00:12:43.925 killing process with pid 69326 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:43.925 11:36:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69326' 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69326 00:12:43.925 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69326 00:12:44.196 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:44.196 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.196 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:44.196 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.196 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:44.196 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.196 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.196 rmmod nvme_tcp 00:12:44.196 rmmod nvme_fabrics 00:12:44.196 rmmod nvme_keyring 00:12:44.453 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.453 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:44.453 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:44.453 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72335 ']' 00:12:44.453 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72335 00:12:44.453 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72335 ']' 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72335 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72335 00:12:44.454 killing process with pid 72335 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72335' 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72335 00:12:44.454 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72335 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Lwv /tmp/spdk.key-sha256.SKR /tmp/spdk.key-sha384.JDX /tmp/spdk.key-sha512.OWN /tmp/spdk.key-sha512.4ZG /tmp/spdk.key-sha384.OV5 /tmp/spdk.key-sha256.LG0 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:44.711 00:12:44.711 real 2m49.847s 00:12:44.711 user 6m45.610s 00:12:44.711 sys 0m27.234s 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.711 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 ************************************ 00:12:44.711 END TEST nvmf_auth_target 00:12:44.711 ************************************ 00:12:44.711 11:36:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:44.711 11:36:47 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:12:44.711 11:36:47 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:44.711 11:36:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:44.711 11:36:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.711 11:36:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:44.712 ************************************ 00:12:44.712 START TEST nvmf_bdevio_no_huge 00:12:44.712 ************************************ 00:12:44.712 11:36:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:44.712 * Looking for test storage... 00:12:44.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.712 
11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:44.712 11:36:48 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:44.712 Cannot find device "nvmf_tgt_br" 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:44.712 Cannot find device "nvmf_tgt_br2" 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:44.712 Cannot find device "nvmf_tgt_br" 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:44.712 Cannot find device "nvmf_tgt_br2" 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:44.712 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:44.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:44.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:44.969 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:44.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:44.970 00:12:44.970 --- 10.0.0.2 ping statistics --- 00:12:44.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.970 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:44.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:44.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:12:44.970 00:12:44.970 --- 10.0.0.3 ping statistics --- 00:12:44.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.970 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:44.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:44.970 00:12:44.970 --- 10.0.0.1 ping statistics --- 00:12:44.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.970 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72645 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72645 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72645 ']' 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.970 11:36:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:45.228 [2024-07-12 11:36:48.473961] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:12:45.228 [2024-07-12 11:36:48.474053] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:45.228 [2024-07-12 11:36:48.615520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.484 [2024-07-12 11:36:48.737921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:45.484 [2024-07-12 11:36:48.738191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.484 [2024-07-12 11:36:48.738333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.484 [2024-07-12 11:36:48.738389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.484 [2024-07-12 11:36:48.738515] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.484 [2024-07-12 11:36:48.738731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:45.484 [2024-07-12 11:36:48.738810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:45.484 [2024-07-12 11:36:48.738884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:45.484 [2024-07-12 11:36:48.738885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.484 [2024-07-12 11:36:48.743873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.048 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.049 [2024-07-12 11:36:49.430201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.049 Malloc0 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:46.049 [2024-07-12 11:36:49.474372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:46.049 { 00:12:46.049 "params": { 00:12:46.049 "name": "Nvme$subsystem", 00:12:46.049 "trtype": "$TEST_TRANSPORT", 00:12:46.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:46.049 "adrfam": "ipv4", 00:12:46.049 "trsvcid": "$NVMF_PORT", 00:12:46.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:46.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:46.049 "hdgst": ${hdgst:-false}, 00:12:46.049 "ddgst": ${ddgst:-false} 00:12:46.049 }, 00:12:46.049 "method": "bdev_nvme_attach_controller" 00:12:46.049 } 00:12:46.049 EOF 00:12:46.049 )") 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:46.049 11:36:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:46.049 "params": { 00:12:46.049 "name": "Nvme1", 00:12:46.049 "trtype": "tcp", 00:12:46.049 "traddr": "10.0.0.2", 00:12:46.049 "adrfam": "ipv4", 00:12:46.049 "trsvcid": "4420", 00:12:46.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:46.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:46.049 "hdgst": false, 00:12:46.049 "ddgst": false 00:12:46.049 }, 00:12:46.049 "method": "bdev_nvme_attach_controller" 00:12:46.049 }' 00:12:46.306 [2024-07-12 11:36:49.539789] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:12:46.306 [2024-07-12 11:36:49.539927] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72687 ] 00:12:46.306 [2024-07-12 11:36:49.692347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:46.564 [2024-07-12 11:36:49.813824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.564 [2024-07-12 11:36:49.813953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.564 [2024-07-12 11:36:49.813959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.564 [2024-07-12 11:36:49.827553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:46.564 I/O targets: 00:12:46.564 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:46.564 00:12:46.564 00:12:46.564 CUnit - A unit testing framework for C - Version 2.1-3 00:12:46.564 http://cunit.sourceforge.net/ 00:12:46.564 00:12:46.564 00:12:46.564 Suite: bdevio tests on: Nvme1n1 00:12:46.564 Test: blockdev write read block ...passed 00:12:46.564 Test: blockdev write zeroes read block ...passed 00:12:46.564 Test: blockdev write zeroes read no split ...passed 00:12:46.564 Test: blockdev write zeroes read split ...passed 00:12:46.821 Test: blockdev write zeroes read split partial ...passed 00:12:46.821 Test: blockdev reset ...[2024-07-12 11:36:50.018149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:46.821 [2024-07-12 11:36:50.018424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1092870 (9): Bad file descriptor 00:12:46.822 [2024-07-12 11:36:50.034022] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:46.822 passed 00:12:46.822 Test: blockdev write read 8 blocks ...passed 00:12:46.822 Test: blockdev write read size > 128k ...passed 00:12:46.822 Test: blockdev write read invalid size ...passed 00:12:46.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:46.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:46.822 Test: blockdev write read max offset ...passed 00:12:46.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:46.822 Test: blockdev writev readv 8 blocks ...passed 00:12:46.822 Test: blockdev writev readv 30 x 1block ...passed 00:12:46.822 Test: blockdev writev readv block ...passed 00:12:46.822 Test: blockdev writev readv size > 128k ...passed 00:12:46.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:46.822 Test: blockdev comparev and writev ...[2024-07-12 11:36:50.044758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.822 [2024-07-12 11:36:50.044810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.044833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.822 [2024-07-12 11:36:50.044845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.045148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.822 [2024-07-12 11:36:50.045172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.045190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.822 [2024-07-12 11:36:50.045200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.045465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.822 [2024-07-12 11:36:50.045486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.045503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.822 [2024-07-12 11:36:50.045513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.046052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.822 [2024-07-12 11:36:50.046089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.046108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.822 [2024-07-12 11:36:50.046119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:46.822 passed 00:12:46.822 Test: blockdev nvme passthru rw ...passed 00:12:46.822 Test: blockdev nvme passthru vendor specific ...passed 00:12:46.822 Test: blockdev nvme admin passthru ...[2024-07-12 11:36:50.047276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:46.822 [2024-07-12 11:36:50.047476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.047614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:46.822 [2024-07-12 11:36:50.047634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.047743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:46.822 [2024-07-12 11:36:50.047759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:46.822 [2024-07-12 11:36:50.047863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:46.822 [2024-07-12 11:36:50.047879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:46.822 passed 00:12:46.822 Test: blockdev copy ...passed 00:12:46.822 00:12:46.822 Run Summary: Type Total Ran Passed Failed Inactive 00:12:46.822 suites 1 1 n/a 0 0 00:12:46.822 tests 23 23 23 0 0 00:12:46.822 asserts 152 152 152 0 n/a 00:12:46.822 00:12:46.822 Elapsed time = 0.160 seconds 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.080 rmmod nvme_tcp 00:12:47.080 rmmod nvme_fabrics 00:12:47.080 rmmod nvme_keyring 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72645 ']' 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72645 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72645 ']' 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72645 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72645 00:12:47.080 killing process with pid 72645 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72645' 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72645 00:12:47.080 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72645 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:47.648 00:12:47.648 real 0m2.964s 00:12:47.648 user 0m9.755s 00:12:47.648 sys 0m1.169s 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:47.648 ************************************ 00:12:47.648 END TEST nvmf_bdevio_no_huge 00:12:47.648 ************************************ 00:12:47.648 11:36:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:47.648 11:36:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:47.648 11:36:50 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:47.648 11:36:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:47.648 11:36:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.648 11:36:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:47.648 ************************************ 00:12:47.648 START TEST nvmf_tls 00:12:47.648 ************************************ 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:47.648 * Looking for test storage... 
00:12:47.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:12:47.648 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.649 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:47.907 Cannot find device "nvmf_tgt_br" 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:47.907 Cannot find device "nvmf_tgt_br2" 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:47.907 Cannot find device "nvmf_tgt_br" 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:47.907 Cannot find device "nvmf_tgt_br2" 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:47.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:47.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:47.907 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:47.908 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:47.908 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:47.908 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:47.908 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:47.908 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:48.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:12:48.166 00:12:48.166 --- 10.0.0.2 ping statistics --- 00:12:48.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.166 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:48.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:12:48.166 00:12:48.166 --- 10.0.0.3 ping statistics --- 00:12:48.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.166 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:48.166 00:12:48.166 --- 10.0.0.1 ping statistics --- 00:12:48.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.166 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:48.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72864 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72864 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72864 ']' 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.166 11:36:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:48.166 [2024-07-12 11:36:51.512005] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:12:48.166 [2024-07-12 11:36:51.512090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.426 [2024-07-12 11:36:51.646154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.426 [2024-07-12 11:36:51.784214] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.426 [2024-07-12 11:36:51.784276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
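Before the target application above was started, nvmf_veth_init built the test network that every connection in the rest of this log relies on: nvmf_init_if at 10.0.0.1 on the host, nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Condensed from the trace above, this is roughly the sequence (a sketch of what nvmf/common.sh performs, not a replacement for it):

  # namespace and veth pairs; the target-side ends move into the namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator on the host, two target addresses in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and allow NVMe/TCP traffic on 4420
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above simply confirm that the initiator address and both target addresses are reachable before the nvmf_tgt application is launched inside the namespace.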
00:12:48.426 [2024-07-12 11:36:51.784290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.426 [2024-07-12 11:36:51.784299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.426 [2024-07-12 11:36:51.784306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.426 [2024-07-12 11:36:51.784333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:49.361 true 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:49.361 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:49.620 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:49.620 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:49.620 11:36:52 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:49.878 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:49.878 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:50.136 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:50.136 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:50.136 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:50.394 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:50.394 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:50.652 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:50.652 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:50.652 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:50.652 11:36:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:50.909 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:50.910 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:50.910 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:51.167 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:51.167 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:12:51.425 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:51.425 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:51.425 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:51.683 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:51.683 11:36:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:51.941 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:51.941 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:51.941 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:51.941 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.vRxMqbRYFB 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.kLYkV1JOjL 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.vRxMqbRYFB 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kLYkV1JOjL 00:12:51.942 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:52.200 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:52.765 [2024-07-12 11:36:55.938278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:12:52.765 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.vRxMqbRYFB 00:12:52.765 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vRxMqbRYFB 00:12:52.765 11:36:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:53.024 [2024-07-12 11:36:56.270190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.024 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:53.282 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:53.540 [2024-07-12 11:36:56.734260] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:53.540 [2024-07-12 11:36:56.734495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.540 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:53.540 malloc0 00:12:53.801 11:36:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:54.059 11:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vRxMqbRYFB 00:12:54.059 [2024-07-12 11:36:57.501695] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:54.318 11:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.vRxMqbRYFB 00:13:04.294 Initializing NVMe Controllers 00:13:04.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:04.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:04.294 Initialization complete. Launching workers. 
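The two key strings generated a few steps back and written to /tmp/tmp.vRxMqbRYFB and /tmp/tmp.kLYkV1JOjL (then handed to nvmf_subsystem_add_host and to the perf run via --psk) are NVMe TLS PSKs in interchange format. Below is a minimal sketch of what format_key in nvmf/common.sh appears to compute, under the assumption that the configured key has its CRC-32 appended in little-endian order before base64 encoding; the real helper in the repository is authoritative:

  # format_key_sketch <prefix> <key> <hash-id>; mirrors the trace's "format_key NVMeTLSkey-1 <key> 1"
  format_key_sketch() {
    local prefix=$1 key=$2 digest=$3
    # assumption: interchange format is prefix:hash:base64(key bytes + little-endian CRC-32):
    python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); print("%s:%02x:%s:" % (sys.argv[1], int(sys.argv[3]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()), end="")' "$prefix" "$key" "$digest"
  }
  # Under that assumption, "format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 1"
  # reproduces the first key written above (NVMeTLSkey-1:01:...JEiQ:).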
00:13:04.294 ======================================================== 00:13:04.294 Latency(us) 00:13:04.294 Device Information : IOPS MiB/s Average min max 00:13:04.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9293.27 36.30 6888.44 1502.22 10242.53 00:13:04.294 ======================================================== 00:13:04.294 Total : 9293.27 36.30 6888.44 1502.22 10242.53 00:13:04.294 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vRxMqbRYFB 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vRxMqbRYFB' 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73096 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73096 /var/tmp/bdevperf.sock 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73096 ']' 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:04.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.294 11:37:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.552 [2024-07-12 11:37:07.783175] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
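For reference, the TLS-capable target that this perf run measured, and that every bdevperf case below attaches to, was configured by the setup_nvmf_tgt call traced earlier; condensed, with rpc_py pointing at the same scripts/rpc.py, the sequence is:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py sock_impl_set_options -i ssl --tls-version 13
  $rpc_py framework_start_init
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k asks for a TLS-secured listener (hence the "TLS support is considered experimental" notice)
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vRxMqbRYFB

Only host1 is registered with a PSK for cnode1, which is exactly what the negative cases later in this log exercise.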
00:13:04.552 [2024-07-12 11:37:07.783534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73096 ] 00:13:04.552 [2024-07-12 11:37:07.926788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.810 [2024-07-12 11:37:08.044897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.810 [2024-07-12 11:37:08.098667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:05.377 11:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.377 11:37:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:05.377 11:37:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vRxMqbRYFB 00:13:05.944 [2024-07-12 11:37:09.098085] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:05.944 [2024-07-12 11:37:09.098589] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:05.944 TLSTESTn1 00:13:05.944 11:37:09 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:05.944 Running I/O for 10 seconds... 00:13:15.925 00:13:15.925 Latency(us) 00:13:15.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.925 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:15.925 Verification LBA range: start 0x0 length 0x2000 00:13:15.925 TLSTESTn1 : 10.02 3834.39 14.98 0.00 0.00 33317.42 7119.59 39083.29 00:13:15.925 =================================================================================================================== 00:13:15.925 Total : 3834.39 14.98 0.00 0.00 33317.42 7119.59 39083.29 00:13:15.925 0 00:13:15.925 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:15.925 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73096 00:13:15.925 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73096 ']' 00:13:15.925 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73096 00:13:15.925 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:15.925 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.925 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73096 00:13:16.183 killing process with pid 73096 00:13:16.183 Received shutdown signal, test time was about 10.000000 seconds 00:13:16.183 00:13:16.183 Latency(us) 00:13:16.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.183 =================================================================================================================== 00:13:16.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73096' 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73096 00:13:16.183 [2024-07-12 11:37:19.376910] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73096 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kLYkV1JOjL 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kLYkV1JOjL 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:16.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kLYkV1JOjL 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kLYkV1JOjL' 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73230 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73230 /var/tmp/bdevperf.sock 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73230 ']' 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.183 11:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:16.441 [2024-07-12 11:37:19.656390] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
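Every run_bdevperf case, including the negative one starting here with the second key (/tmp/tmp.kLYkV1JOjL), drives the same initiator-side steps against bdevperf's private RPC socket; the successful TLSTEST run above amounts to:

  # start bdevperf idle (-z), waiting for RPCs on its own socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach the TLS-enabled NVMe/TCP controller using the registered PSK
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vRxMqbRYFB
  # kick off the I/O run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

(The harness additionally waits for the bdevperf socket before issuing RPCs.) The negative variants swap in the wrong key or an unregistered host/subsystem NQN and expect the attach RPC itself to fail, which is what the traces below show.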
00:13:16.441 [2024-07-12 11:37:19.657014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73230 ] 00:13:16.441 [2024-07-12 11:37:19.792773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.699 [2024-07-12 11:37:19.915088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.699 [2024-07-12 11:37:19.970434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:17.263 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.263 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:17.263 11:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kLYkV1JOjL 00:13:17.521 [2024-07-12 11:37:20.911754] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:17.521 [2024-07-12 11:37:20.912512] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:17.521 [2024-07-12 11:37:20.923035] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:17.521 [2024-07-12 11:37:20.923611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23101f0 (107): Transport endpoint is not connected 00:13:17.521 [2024-07-12 11:37:20.924803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23101f0 (9): Bad file descriptor 00:13:17.521 [2024-07-12 11:37:20.925701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:17.521 [2024-07-12 11:37:20.925733] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:17.521 [2024-07-12 11:37:20.925751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:17.521 request: 00:13:17.521 { 00:13:17.521 "name": "TLSTEST", 00:13:17.521 "trtype": "tcp", 00:13:17.521 "traddr": "10.0.0.2", 00:13:17.521 "adrfam": "ipv4", 00:13:17.521 "trsvcid": "4420", 00:13:17.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.521 "prchk_reftag": false, 00:13:17.521 "prchk_guard": false, 00:13:17.521 "hdgst": false, 00:13:17.521 "ddgst": false, 00:13:17.521 "psk": "/tmp/tmp.kLYkV1JOjL", 00:13:17.521 "method": "bdev_nvme_attach_controller", 00:13:17.521 "req_id": 1 00:13:17.521 } 00:13:17.521 Got JSON-RPC error response 00:13:17.521 response: 00:13:17.521 { 00:13:17.521 "code": -5, 00:13:17.521 "message": "Input/output error" 00:13:17.521 } 00:13:17.521 11:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73230 00:13:17.521 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73230 ']' 00:13:17.521 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73230 00:13:17.521 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:17.521 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.780 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73230 00:13:17.780 killing process with pid 73230 00:13:17.780 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.780 00:13:17.780 Latency(us) 00:13:17.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.780 =================================================================================================================== 00:13:17.780 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:17.780 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:17.780 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:17.780 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73230' 00:13:17.780 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73230 00:13:17.780 [2024-07-12 11:37:20.986941] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:17.780 11:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73230 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vRxMqbRYFB 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vRxMqbRYFB 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vRxMqbRYFB 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vRxMqbRYFB' 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73263 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73263 /var/tmp/bdevperf.sock 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73263 ']' 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:17.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.780 11:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.037 [2024-07-12 11:37:21.274924] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:13:18.037 [2024-07-12 11:37:21.275349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73263 ] 00:13:18.037 [2024-07-12 11:37:21.415527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.294 [2024-07-12 11:37:21.534204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.294 [2024-07-12 11:37:21.588434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.vRxMqbRYFB 00:13:19.226 [2024-07-12 11:37:22.532674] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:19.226 [2024-07-12 11:37:22.532838] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:19.226 [2024-07-12 11:37:22.543026] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:19.226 [2024-07-12 11:37:22.543076] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:19.226 [2024-07-12 11:37:22.543136] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:19.226 [2024-07-12 11:37:22.543735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23071f0 (107): Transport endpoint is not connected 00:13:19.226 [2024-07-12 11:37:22.544715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23071f0 (9): Bad file descriptor 00:13:19.226 [2024-07-12 11:37:22.545711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:19.226 [2024-07-12 11:37:22.545751] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:19.226 [2024-07-12 11:37:22.545780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:19.226 request: 00:13:19.226 { 00:13:19.226 "name": "TLSTEST", 00:13:19.226 "trtype": "tcp", 00:13:19.226 "traddr": "10.0.0.2", 00:13:19.226 "adrfam": "ipv4", 00:13:19.226 "trsvcid": "4420", 00:13:19.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.226 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:19.226 "prchk_reftag": false, 00:13:19.226 "prchk_guard": false, 00:13:19.226 "hdgst": false, 00:13:19.226 "ddgst": false, 00:13:19.226 "psk": "/tmp/tmp.vRxMqbRYFB", 00:13:19.226 "method": "bdev_nvme_attach_controller", 00:13:19.226 "req_id": 1 00:13:19.226 } 00:13:19.226 Got JSON-RPC error response 00:13:19.226 response: 00:13:19.226 { 00:13:19.226 "code": -5, 00:13:19.226 "message": "Input/output error" 00:13:19.226 } 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73263 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73263 ']' 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73263 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73263 00:13:19.226 killing process with pid 73263 00:13:19.226 Received shutdown signal, test time was about 10.000000 seconds 00:13:19.226 00:13:19.226 Latency(us) 00:13:19.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.226 =================================================================================================================== 00:13:19.226 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73263' 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73263 00:13:19.226 [2024-07-12 11:37:22.585642] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:19.226 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73263 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vRxMqbRYFB 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vRxMqbRYFB 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vRxMqbRYFB 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vRxMqbRYFB' 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73285 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:19.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73285 /var/tmp/bdevperf.sock 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73285 ']' 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.484 11:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.484 [2024-07-12 11:37:22.859687] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:13:19.484 [2024-07-12 11:37:22.859774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73285 ] 00:13:19.747 [2024-07-12 11:37:22.998536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.747 [2024-07-12 11:37:23.131006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.747 [2024-07-12 11:37:23.184626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:20.683 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.683 11:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:20.683 11:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vRxMqbRYFB 00:13:20.683 [2024-07-12 11:37:24.035466] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:20.683 [2024-07-12 11:37:24.035618] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:20.683 [2024-07-12 11:37:24.043244] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:20.683 [2024-07-12 11:37:24.043287] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:20.683 [2024-07-12 11:37:24.043368] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:20.683 [2024-07-12 11:37:24.043369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d131f0 (107): Transport endpoint is not connected 00:13:20.683 [2024-07-12 11:37:24.044358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d131f0 (9): Bad file descriptor 00:13:20.683 [2024-07-12 11:37:24.045353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:20.683 [2024-07-12 11:37:24.045383] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:20.683 [2024-07-12 11:37:24.045400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
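Note: this is the same pattern as the previous case from the other direction: the PSK file is the one the target knows, but the subsystem NQN (cnode2) does not match anything configured on the target, so again no PSK can be resolved for the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" and the request dumped next fails with an input/output error. When triaging such failures it helps to compare the NQNs in use against what the target actually exposes; a minimal sketch, assuming the target answers on the default /var/tmp/spdk.sock:

  # list subsystems with their listeners and allowed hosts, as the target sees them
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems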
00:13:20.683 request: 00:13:20.683 { 00:13:20.683 "name": "TLSTEST", 00:13:20.683 "trtype": "tcp", 00:13:20.683 "traddr": "10.0.0.2", 00:13:20.683 "adrfam": "ipv4", 00:13:20.683 "trsvcid": "4420", 00:13:20.683 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:20.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.683 "prchk_reftag": false, 00:13:20.683 "prchk_guard": false, 00:13:20.683 "hdgst": false, 00:13:20.683 "ddgst": false, 00:13:20.683 "psk": "/tmp/tmp.vRxMqbRYFB", 00:13:20.683 "method": "bdev_nvme_attach_controller", 00:13:20.683 "req_id": 1 00:13:20.683 } 00:13:20.683 Got JSON-RPC error response 00:13:20.684 response: 00:13:20.684 { 00:13:20.684 "code": -5, 00:13:20.684 "message": "Input/output error" 00:13:20.684 } 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73285 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73285 ']' 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73285 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73285 00:13:20.684 killing process with pid 73285 00:13:20.684 Received shutdown signal, test time was about 10.000000 seconds 00:13:20.684 00:13:20.684 Latency(us) 00:13:20.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.684 =================================================================================================================== 00:13:20.684 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73285' 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73285 00:13:20.684 [2024-07-12 11:37:24.091277] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:20.684 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73285 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73313 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73313 /var/tmp/bdevperf.sock 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73313 ']' 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.943 11:37:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.943 [2024-07-12 11:37:24.363364] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:13:20.943 [2024-07-12 11:37:24.363451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73313 ] 00:13:21.201 [2024-07-12 11:37:24.493177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.202 [2024-07-12 11:37:24.609255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.460 [2024-07-12 11:37:24.662494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:22.027 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.027 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:22.027 11:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:22.285 [2024-07-12 11:37:25.623918] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:22.285 [2024-07-12 11:37:25.625663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166bc00 (9): Bad file descriptor 00:13:22.285 [2024-07-12 11:37:25.626656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:22.285 [2024-07-12 11:37:25.626706] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:22.285 [2024-07-12 11:37:25.626733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
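Note: this third case attaches with no PSK at all (psk= is left empty in run_bdevperf) against a listener that was added with -k, which is recorded as "secure_channel": true in the saved configuration later in this log, so the plain TCP connection is torn down before the controller can be initialized and the request below fails the same way. A listener intended to also accept non-TLS initiators would simply be added without that flag; a minimal sketch, assuming the default target RPC socket (port 4421 is made up here for the comparison):

  # TLS-only listener, as configured by this test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # plain TCP listener on a second port, for non-TLS hosts (hypothetical)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421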
00:13:22.285 request: 00:13:22.285 { 00:13:22.285 "name": "TLSTEST", 00:13:22.285 "trtype": "tcp", 00:13:22.285 "traddr": "10.0.0.2", 00:13:22.285 "adrfam": "ipv4", 00:13:22.285 "trsvcid": "4420", 00:13:22.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.285 "prchk_reftag": false, 00:13:22.285 "prchk_guard": false, 00:13:22.285 "hdgst": false, 00:13:22.285 "ddgst": false, 00:13:22.285 "method": "bdev_nvme_attach_controller", 00:13:22.285 "req_id": 1 00:13:22.285 } 00:13:22.285 Got JSON-RPC error response 00:13:22.285 response: 00:13:22.285 { 00:13:22.285 "code": -5, 00:13:22.285 "message": "Input/output error" 00:13:22.285 } 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73313 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73313 ']' 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73313 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73313 00:13:22.285 killing process with pid 73313 00:13:22.285 Received shutdown signal, test time was about 10.000000 seconds 00:13:22.285 00:13:22.285 Latency(us) 00:13:22.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.285 =================================================================================================================== 00:13:22.285 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73313' 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73313 00:13:22.285 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73313 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72864 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72864 ']' 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72864 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72864 00:13:22.543 killing process with pid 72864 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72864' 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72864 00:13:22.543 [2024-07-12 11:37:25.922230] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:22.543 11:37:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72864 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.PG0CDOhMqD 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.PG0CDOhMqD 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73356 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73356 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73356 ']' 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.800 11:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.058 [2024-07-12 11:37:26.270814] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:13:23.058 [2024-07-12 11:37:26.270913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.058 [2024-07-12 11:37:26.408920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.316 [2024-07-12 11:37:26.527170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.316 [2024-07-12 11:37:26.527228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.316 [2024-07-12 11:37:26.527240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.316 [2024-07-12 11:37:26.527253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.316 [2024-07-12 11:37:26.527261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.316 [2024-07-12 11:37:26.527288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.316 [2024-07-12 11:37:26.580482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.PG0CDOhMqD 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PG0CDOhMqD 00:13:23.882 11:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:24.140 [2024-07-12 11:37:27.496319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.140 11:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:24.399 11:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:24.657 [2024-07-12 11:37:28.020388] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:24.657 [2024-07-12 11:37:28.020674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.657 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:24.965 malloc0 00:13:24.965 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:25.227 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PG0CDOhMqD 00:13:25.485 
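Note: the /tmp/tmp.PG0CDOhMqD file being registered here holds the key in the TLS PSK interchange format produced at target/tls.sh@159 above: the "NVMeTLSkey-1:02:<base64>:" string whose base64 payload is the configured key followed by a 4-byte CRC-32, with the "02" field naming the hash (digest 2, i.e. SHA-384, was passed to format_interchange_psk). A minimal sketch of that encoding, assuming (as the length of the logged key_long suggests) the hex string is used literally as the key bytes and the CRC-32 is appended in little-endian order:

  key=00112233445566778899aabbccddeeff0011223344556677
  # the CRC byte order and the digest mapping are assumptions; compare the output with the key_long value logged above
  python3 -c "import base64, zlib; k = b'$key'; crc = zlib.crc32(k).to_bytes(4, 'little'); print('NVMeTLSkey-1:02:' + base64.b64encode(k + crc).decode() + ':')"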
[2024-07-12 11:37:28.768440] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PG0CDOhMqD 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PG0CDOhMqD' 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73405 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73405 /var/tmp/bdevperf.sock 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73405 ']' 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.485 11:37:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.485 [2024-07-12 11:37:28.837870] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:13:25.485 [2024-07-12 11:37:28.837964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73405 ] 00:13:25.744 [2024-07-12 11:37:28.969821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.744 [2024-07-12 11:37:29.112088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.744 [2024-07-12 11:37:29.170274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:26.679 11:37:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.679 11:37:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:26.679 11:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PG0CDOhMqD 00:13:26.679 [2024-07-12 11:37:30.000545] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:26.679 [2024-07-12 11:37:30.000686] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:26.679 TLSTESTn1 00:13:26.679 11:37:30 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:26.937 Running I/O for 10 seconds... 00:13:36.910 00:13:36.911 Latency(us) 00:13:36.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.911 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:36.911 Verification LBA range: start 0x0 length 0x2000 00:13:36.911 TLSTESTn1 : 10.02 3989.30 15.58 0.00 0.00 32022.87 7298.33 23712.12 00:13:36.911 =================================================================================================================== 00:13:36.911 Total : 3989.30 15.58 0.00 0.00 32022.87 7298.33 23712.12 00:13:36.911 0 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73405 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73405 ']' 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73405 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73405 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:36.911 killing process with pid 73405 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73405' 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73405 00:13:36.911 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73405 00:13:36.911 Received shutdown signal, test time was about 10.000000 
seconds 00:13:36.911 00:13:36.911 Latency(us) 00:13:36.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.911 =================================================================================================================== 00:13:36.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.911 [2024-07-12 11:37:40.259472] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.PG0CDOhMqD 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PG0CDOhMqD 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PG0CDOhMqD 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PG0CDOhMqD 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PG0CDOhMqD' 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73545 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73545 /var/tmp/bdevperf.sock 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73545 ']' 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.169 11:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.169 [2024-07-12 11:37:40.549907] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:13:37.169 [2024-07-12 11:37:40.550009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73545 ] 00:13:37.427 [2024-07-12 11:37:40.685856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.427 [2024-07-12 11:37:40.804396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.427 [2024-07-12 11:37:40.862510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PG0CDOhMqD 00:13:38.362 [2024-07-12 11:37:41.755202] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:38.362 [2024-07-12 11:37:41.755294] bdev_nvme.c:6124:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:38.362 [2024-07-12 11:37:41.755307] bdev_nvme.c:6229:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.PG0CDOhMqD 00:13:38.362 request: 00:13:38.362 { 00:13:38.362 "name": "TLSTEST", 00:13:38.362 "trtype": "tcp", 00:13:38.362 "traddr": "10.0.0.2", 00:13:38.362 "adrfam": "ipv4", 00:13:38.362 "trsvcid": "4420", 00:13:38.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.362 "prchk_reftag": false, 00:13:38.362 "prchk_guard": false, 00:13:38.362 "hdgst": false, 00:13:38.362 "ddgst": false, 00:13:38.362 "psk": "/tmp/tmp.PG0CDOhMqD", 00:13:38.362 "method": "bdev_nvme_attach_controller", 00:13:38.362 "req_id": 1 00:13:38.362 } 00:13:38.362 Got JSON-RPC error response 00:13:38.362 response: 00:13:38.362 { 00:13:38.362 "code": -1, 00:13:38.362 "message": "Operation not permitted" 00:13:38.362 } 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73545 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73545 ']' 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73545 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73545 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:38.362 killing process with pid 73545 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73545' 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73545 00:13:38.362 11:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73545 00:13:38.362 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.362 00:13:38.362 Latency(us) 00:13:38.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:13:38.362 =================================================================================================================== 00:13:38.362 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73356 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73356 ']' 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73356 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73356 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:38.642 killing process with pid 73356 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73356' 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73356 00:13:38.642 [2024-07-12 11:37:42.054646] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:38.642 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73356 00:13:38.900 11:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73573 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73573 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73573 ']' 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.901 11:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:39.159 [2024-07-12 11:37:42.353990] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:13:39.159 [2024-07-12 11:37:42.354085] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.159 [2024-07-12 11:37:42.488917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.159 [2024-07-12 11:37:42.607019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.159 [2024-07-12 11:37:42.607093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.159 [2024-07-12 11:37:42.607104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.159 [2024-07-12 11:37:42.607113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.159 [2024-07-12 11:37:42.607121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.159 [2024-07-12 11:37:42.607158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.418 [2024-07-12 11:37:42.661225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.PG0CDOhMqD 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PG0CDOhMqD 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.PG0CDOhMqD 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PG0CDOhMqD 00:13:39.987 11:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:40.269 [2024-07-12 11:37:43.585344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.269 11:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:40.526 11:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:40.784 [2024-07-12 11:37:44.185454] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:13:40.784 [2024-07-12 11:37:44.185719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.784 11:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:41.042 malloc0 00:13:41.042 11:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:41.300 11:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PG0CDOhMqD 00:13:41.558 [2024-07-12 11:37:44.936904] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:41.558 [2024-07-12 11:37:44.936964] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:41.558 [2024-07-12 11:37:44.936999] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:41.558 request: 00:13:41.558 { 00:13:41.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.558 "host": "nqn.2016-06.io.spdk:host1", 00:13:41.558 "psk": "/tmp/tmp.PG0CDOhMqD", 00:13:41.558 "method": "nvmf_subsystem_add_host", 00:13:41.558 "req_id": 1 00:13:41.558 } 00:13:41.558 Got JSON-RPC error response 00:13:41.558 response: 00:13:41.558 { 00:13:41.558 "code": -32603, 00:13:41.558 "message": "Internal error" 00:13:41.558 } 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73573 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73573 ']' 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73573 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73573 00:13:41.558 killing process with pid 73573 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73573' 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73573 00:13:41.558 11:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73573 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.PG0CDOhMqD 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73640 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73640 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73640 ']' 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.816 11:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.074 [2024-07-12 11:37:45.289992] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:13:42.074 [2024-07-12 11:37:45.290079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.074 [2024-07-12 11:37:45.423268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.331 [2024-07-12 11:37:45.568809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.331 [2024-07-12 11:37:45.569139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.331 [2024-07-12 11:37:45.569295] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.331 [2024-07-12 11:37:45.569494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.331 [2024-07-12 11:37:45.569626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
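Note: with the key file's permissions restored to 0600 (target/tls.sh@181 above; the 0666 experiment made both the initiator and the target refuse to load the PSK), a fresh nvmf target is started and the setup_nvmf_tgt call that follows rebuilds the TLS-enabled configuration. A consolidated sketch of that RPC sequence, assuming the default /var/tmp/spdk.sock target socket and the same addresses used throughout this run:

  # TCP transport, subsystem and TLS-enabled listener
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # namespace backed by a 32 MiB malloc bdev with 4 KiB blocks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # host1 may connect, keyed by the 0600-permission PSK file
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PG0CDOhMqD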
00:13:42.331 [2024-07-12 11:37:45.569767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.331 [2024-07-12 11:37:45.622524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.PG0CDOhMqD 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PG0CDOhMqD 00:13:42.897 11:37:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:43.462 [2024-07-12 11:37:46.646375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.462 11:37:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:43.743 11:37:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:44.025 [2024-07-12 11:37:47.182438] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:44.025 [2024-07-12 11:37:47.182702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.025 11:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:44.025 malloc0 00:13:44.025 11:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:44.283 11:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PG0CDOhMqD 00:13:44.542 [2024-07-12 11:37:47.933813] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73696 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73696 /var/tmp/bdevperf.sock 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73696 ']' 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.542 11:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.800 [2024-07-12 11:37:48.018536] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:13:44.800 [2024-07-12 11:37:48.018985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73696 ] 00:13:44.800 [2024-07-12 11:37:48.157263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.060 [2024-07-12 11:37:48.276196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.060 [2024-07-12 11:37:48.331065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:45.627 11:37:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.627 11:37:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:45.627 11:37:48 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PG0CDOhMqD 00:13:45.886 [2024-07-12 11:37:49.276495] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:45.886 [2024-07-12 11:37:49.276669] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:46.144 TLSTESTn1 00:13:46.144 11:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:46.404 11:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:46.404 "subsystems": [ 00:13:46.404 { 00:13:46.404 "subsystem": "keyring", 00:13:46.404 "config": [] 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "subsystem": "iobuf", 00:13:46.404 "config": [ 00:13:46.404 { 00:13:46.404 "method": "iobuf_set_options", 00:13:46.404 "params": { 00:13:46.404 "small_pool_count": 8192, 00:13:46.404 "large_pool_count": 1024, 00:13:46.404 "small_bufsize": 8192, 00:13:46.404 "large_bufsize": 135168 00:13:46.404 } 00:13:46.404 } 00:13:46.404 ] 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "subsystem": "sock", 00:13:46.404 "config": [ 00:13:46.404 { 00:13:46.404 "method": "sock_set_default_impl", 00:13:46.404 "params": { 00:13:46.404 "impl_name": "uring" 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "sock_impl_set_options", 00:13:46.404 "params": { 00:13:46.404 "impl_name": "ssl", 00:13:46.404 "recv_buf_size": 4096, 00:13:46.404 "send_buf_size": 4096, 00:13:46.404 "enable_recv_pipe": true, 00:13:46.404 "enable_quickack": false, 00:13:46.404 "enable_placement_id": 0, 00:13:46.404 "enable_zerocopy_send_server": true, 00:13:46.404 "enable_zerocopy_send_client": false, 00:13:46.404 "zerocopy_threshold": 0, 00:13:46.404 "tls_version": 0, 00:13:46.404 "enable_ktls": false 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "sock_impl_set_options", 00:13:46.404 "params": { 00:13:46.404 "impl_name": "posix", 00:13:46.404 "recv_buf_size": 2097152, 
00:13:46.404 "send_buf_size": 2097152, 00:13:46.404 "enable_recv_pipe": true, 00:13:46.404 "enable_quickack": false, 00:13:46.404 "enable_placement_id": 0, 00:13:46.404 "enable_zerocopy_send_server": true, 00:13:46.404 "enable_zerocopy_send_client": false, 00:13:46.404 "zerocopy_threshold": 0, 00:13:46.404 "tls_version": 0, 00:13:46.404 "enable_ktls": false 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "sock_impl_set_options", 00:13:46.404 "params": { 00:13:46.404 "impl_name": "uring", 00:13:46.404 "recv_buf_size": 2097152, 00:13:46.404 "send_buf_size": 2097152, 00:13:46.404 "enable_recv_pipe": true, 00:13:46.404 "enable_quickack": false, 00:13:46.404 "enable_placement_id": 0, 00:13:46.404 "enable_zerocopy_send_server": false, 00:13:46.404 "enable_zerocopy_send_client": false, 00:13:46.404 "zerocopy_threshold": 0, 00:13:46.404 "tls_version": 0, 00:13:46.404 "enable_ktls": false 00:13:46.404 } 00:13:46.404 } 00:13:46.404 ] 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "subsystem": "vmd", 00:13:46.404 "config": [] 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "subsystem": "accel", 00:13:46.404 "config": [ 00:13:46.404 { 00:13:46.404 "method": "accel_set_options", 00:13:46.404 "params": { 00:13:46.404 "small_cache_size": 128, 00:13:46.404 "large_cache_size": 16, 00:13:46.404 "task_count": 2048, 00:13:46.404 "sequence_count": 2048, 00:13:46.404 "buf_count": 2048 00:13:46.404 } 00:13:46.404 } 00:13:46.404 ] 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "subsystem": "bdev", 00:13:46.404 "config": [ 00:13:46.404 { 00:13:46.404 "method": "bdev_set_options", 00:13:46.404 "params": { 00:13:46.404 "bdev_io_pool_size": 65535, 00:13:46.404 "bdev_io_cache_size": 256, 00:13:46.404 "bdev_auto_examine": true, 00:13:46.404 "iobuf_small_cache_size": 128, 00:13:46.404 "iobuf_large_cache_size": 16 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "bdev_raid_set_options", 00:13:46.404 "params": { 00:13:46.404 "process_window_size_kb": 1024 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "bdev_iscsi_set_options", 00:13:46.404 "params": { 00:13:46.404 "timeout_sec": 30 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "bdev_nvme_set_options", 00:13:46.404 "params": { 00:13:46.404 "action_on_timeout": "none", 00:13:46.404 "timeout_us": 0, 00:13:46.404 "timeout_admin_us": 0, 00:13:46.404 "keep_alive_timeout_ms": 10000, 00:13:46.404 "arbitration_burst": 0, 00:13:46.404 "low_priority_weight": 0, 00:13:46.404 "medium_priority_weight": 0, 00:13:46.404 "high_priority_weight": 0, 00:13:46.404 "nvme_adminq_poll_period_us": 10000, 00:13:46.404 "nvme_ioq_poll_period_us": 0, 00:13:46.404 "io_queue_requests": 0, 00:13:46.404 "delay_cmd_submit": true, 00:13:46.404 "transport_retry_count": 4, 00:13:46.404 "bdev_retry_count": 3, 00:13:46.404 "transport_ack_timeout": 0, 00:13:46.404 "ctrlr_loss_timeout_sec": 0, 00:13:46.404 "reconnect_delay_sec": 0, 00:13:46.404 "fast_io_fail_timeout_sec": 0, 00:13:46.404 "disable_auto_failback": false, 00:13:46.404 "generate_uuids": false, 00:13:46.404 "transport_tos": 0, 00:13:46.404 "nvme_error_stat": false, 00:13:46.404 "rdma_srq_size": 0, 00:13:46.404 "io_path_stat": false, 00:13:46.404 "allow_accel_sequence": false, 00:13:46.404 "rdma_max_cq_size": 0, 00:13:46.404 "rdma_cm_event_timeout_ms": 0, 00:13:46.404 "dhchap_digests": [ 00:13:46.404 "sha256", 00:13:46.404 "sha384", 00:13:46.404 "sha512" 00:13:46.404 ], 00:13:46.404 "dhchap_dhgroups": [ 00:13:46.404 "null", 00:13:46.404 "ffdhe2048", 00:13:46.404 "ffdhe3072", 
00:13:46.404 "ffdhe4096", 00:13:46.404 "ffdhe6144", 00:13:46.404 "ffdhe8192" 00:13:46.404 ] 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "bdev_nvme_set_hotplug", 00:13:46.404 "params": { 00:13:46.404 "period_us": 100000, 00:13:46.404 "enable": false 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "bdev_malloc_create", 00:13:46.404 "params": { 00:13:46.404 "name": "malloc0", 00:13:46.404 "num_blocks": 8192, 00:13:46.404 "block_size": 4096, 00:13:46.404 "physical_block_size": 4096, 00:13:46.404 "uuid": "9b585145-04d3-4ed2-96fc-21324d282db1", 00:13:46.404 "optimal_io_boundary": 0 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "bdev_wait_for_examine" 00:13:46.404 } 00:13:46.404 ] 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "subsystem": "nbd", 00:13:46.404 "config": [] 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "subsystem": "scheduler", 00:13:46.404 "config": [ 00:13:46.404 { 00:13:46.404 "method": "framework_set_scheduler", 00:13:46.404 "params": { 00:13:46.404 "name": "static" 00:13:46.404 } 00:13:46.404 } 00:13:46.404 ] 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "subsystem": "nvmf", 00:13:46.404 "config": [ 00:13:46.404 { 00:13:46.404 "method": "nvmf_set_config", 00:13:46.404 "params": { 00:13:46.404 "discovery_filter": "match_any", 00:13:46.404 "admin_cmd_passthru": { 00:13:46.404 "identify_ctrlr": false 00:13:46.404 } 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "method": "nvmf_set_max_subsystems", 00:13:46.404 "params": { 00:13:46.404 "max_subsystems": 1024 00:13:46.404 } 00:13:46.404 }, 00:13:46.404 { 00:13:46.405 "method": "nvmf_set_crdt", 00:13:46.405 "params": { 00:13:46.405 "crdt1": 0, 00:13:46.405 "crdt2": 0, 00:13:46.405 "crdt3": 0 00:13:46.405 } 00:13:46.405 }, 00:13:46.405 { 00:13:46.405 "method": "nvmf_create_transport", 00:13:46.405 "params": { 00:13:46.405 "trtype": "TCP", 00:13:46.405 "max_queue_depth": 128, 00:13:46.405 "max_io_qpairs_per_ctrlr": 127, 00:13:46.405 "in_capsule_data_size": 4096, 00:13:46.405 "max_io_size": 131072, 00:13:46.405 "io_unit_size": 131072, 00:13:46.405 "max_aq_depth": 128, 00:13:46.405 "num_shared_buffers": 511, 00:13:46.405 "buf_cache_size": 4294967295, 00:13:46.405 "dif_insert_or_strip": false, 00:13:46.405 "zcopy": false, 00:13:46.405 "c2h_success": false, 00:13:46.405 "sock_priority": 0, 00:13:46.405 "abort_timeout_sec": 1, 00:13:46.405 "ack_timeout": 0, 00:13:46.405 "data_wr_pool_size": 0 00:13:46.405 } 00:13:46.405 }, 00:13:46.405 { 00:13:46.405 "method": "nvmf_create_subsystem", 00:13:46.405 "params": { 00:13:46.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.405 "allow_any_host": false, 00:13:46.405 "serial_number": "SPDK00000000000001", 00:13:46.405 "model_number": "SPDK bdev Controller", 00:13:46.405 "max_namespaces": 10, 00:13:46.405 "min_cntlid": 1, 00:13:46.405 "max_cntlid": 65519, 00:13:46.405 "ana_reporting": false 00:13:46.405 } 00:13:46.405 }, 00:13:46.405 { 00:13:46.405 "method": "nvmf_subsystem_add_host", 00:13:46.405 "params": { 00:13:46.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.405 "host": "nqn.2016-06.io.spdk:host1", 00:13:46.405 "psk": "/tmp/tmp.PG0CDOhMqD" 00:13:46.405 } 00:13:46.405 }, 00:13:46.405 { 00:13:46.405 "method": "nvmf_subsystem_add_ns", 00:13:46.405 "params": { 00:13:46.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.405 "namespace": { 00:13:46.405 "nsid": 1, 00:13:46.405 "bdev_name": "malloc0", 00:13:46.405 "nguid": "9B58514504D34ED296FC21324D282DB1", 00:13:46.405 "uuid": "9b585145-04d3-4ed2-96fc-21324d282db1", 
00:13:46.405 "no_auto_visible": false 00:13:46.405 } 00:13:46.405 } 00:13:46.405 }, 00:13:46.405 { 00:13:46.405 "method": "nvmf_subsystem_add_listener", 00:13:46.405 "params": { 00:13:46.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.405 "listen_address": { 00:13:46.405 "trtype": "TCP", 00:13:46.405 "adrfam": "IPv4", 00:13:46.405 "traddr": "10.0.0.2", 00:13:46.405 "trsvcid": "4420" 00:13:46.405 }, 00:13:46.405 "secure_channel": true 00:13:46.405 } 00:13:46.405 } 00:13:46.405 ] 00:13:46.405 } 00:13:46.405 ] 00:13:46.405 }' 00:13:46.405 11:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:46.972 11:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:46.972 "subsystems": [ 00:13:46.972 { 00:13:46.972 "subsystem": "keyring", 00:13:46.972 "config": [] 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "subsystem": "iobuf", 00:13:46.972 "config": [ 00:13:46.972 { 00:13:46.972 "method": "iobuf_set_options", 00:13:46.972 "params": { 00:13:46.972 "small_pool_count": 8192, 00:13:46.972 "large_pool_count": 1024, 00:13:46.972 "small_bufsize": 8192, 00:13:46.972 "large_bufsize": 135168 00:13:46.972 } 00:13:46.972 } 00:13:46.972 ] 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "subsystem": "sock", 00:13:46.972 "config": [ 00:13:46.972 { 00:13:46.972 "method": "sock_set_default_impl", 00:13:46.972 "params": { 00:13:46.972 "impl_name": "uring" 00:13:46.972 } 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "method": "sock_impl_set_options", 00:13:46.972 "params": { 00:13:46.972 "impl_name": "ssl", 00:13:46.972 "recv_buf_size": 4096, 00:13:46.972 "send_buf_size": 4096, 00:13:46.972 "enable_recv_pipe": true, 00:13:46.972 "enable_quickack": false, 00:13:46.972 "enable_placement_id": 0, 00:13:46.972 "enable_zerocopy_send_server": true, 00:13:46.972 "enable_zerocopy_send_client": false, 00:13:46.972 "zerocopy_threshold": 0, 00:13:46.972 "tls_version": 0, 00:13:46.972 "enable_ktls": false 00:13:46.972 } 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "method": "sock_impl_set_options", 00:13:46.972 "params": { 00:13:46.972 "impl_name": "posix", 00:13:46.972 "recv_buf_size": 2097152, 00:13:46.972 "send_buf_size": 2097152, 00:13:46.972 "enable_recv_pipe": true, 00:13:46.972 "enable_quickack": false, 00:13:46.972 "enable_placement_id": 0, 00:13:46.972 "enable_zerocopy_send_server": true, 00:13:46.972 "enable_zerocopy_send_client": false, 00:13:46.972 "zerocopy_threshold": 0, 00:13:46.972 "tls_version": 0, 00:13:46.972 "enable_ktls": false 00:13:46.972 } 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "method": "sock_impl_set_options", 00:13:46.972 "params": { 00:13:46.972 "impl_name": "uring", 00:13:46.972 "recv_buf_size": 2097152, 00:13:46.972 "send_buf_size": 2097152, 00:13:46.972 "enable_recv_pipe": true, 00:13:46.972 "enable_quickack": false, 00:13:46.972 "enable_placement_id": 0, 00:13:46.972 "enable_zerocopy_send_server": false, 00:13:46.972 "enable_zerocopy_send_client": false, 00:13:46.972 "zerocopy_threshold": 0, 00:13:46.972 "tls_version": 0, 00:13:46.972 "enable_ktls": false 00:13:46.972 } 00:13:46.972 } 00:13:46.972 ] 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "subsystem": "vmd", 00:13:46.972 "config": [] 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "subsystem": "accel", 00:13:46.972 "config": [ 00:13:46.972 { 00:13:46.972 "method": "accel_set_options", 00:13:46.972 "params": { 00:13:46.972 "small_cache_size": 128, 00:13:46.972 "large_cache_size": 16, 00:13:46.972 "task_count": 2048, 00:13:46.972 "sequence_count": 
2048, 00:13:46.972 "buf_count": 2048 00:13:46.972 } 00:13:46.972 } 00:13:46.972 ] 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "subsystem": "bdev", 00:13:46.972 "config": [ 00:13:46.972 { 00:13:46.972 "method": "bdev_set_options", 00:13:46.972 "params": { 00:13:46.972 "bdev_io_pool_size": 65535, 00:13:46.972 "bdev_io_cache_size": 256, 00:13:46.972 "bdev_auto_examine": true, 00:13:46.972 "iobuf_small_cache_size": 128, 00:13:46.972 "iobuf_large_cache_size": 16 00:13:46.972 } 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "method": "bdev_raid_set_options", 00:13:46.972 "params": { 00:13:46.972 "process_window_size_kb": 1024 00:13:46.972 } 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "method": "bdev_iscsi_set_options", 00:13:46.972 "params": { 00:13:46.972 "timeout_sec": 30 00:13:46.972 } 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "method": "bdev_nvme_set_options", 00:13:46.972 "params": { 00:13:46.972 "action_on_timeout": "none", 00:13:46.972 "timeout_us": 0, 00:13:46.972 "timeout_admin_us": 0, 00:13:46.972 "keep_alive_timeout_ms": 10000, 00:13:46.972 "arbitration_burst": 0, 00:13:46.972 "low_priority_weight": 0, 00:13:46.972 "medium_priority_weight": 0, 00:13:46.972 "high_priority_weight": 0, 00:13:46.972 "nvme_adminq_poll_period_us": 10000, 00:13:46.972 "nvme_ioq_poll_period_us": 0, 00:13:46.972 "io_queue_requests": 512, 00:13:46.972 "delay_cmd_submit": true, 00:13:46.972 "transport_retry_count": 4, 00:13:46.972 "bdev_retry_count": 3, 00:13:46.972 "transport_ack_timeout": 0, 00:13:46.972 "ctrlr_loss_timeout_sec": 0, 00:13:46.972 "reconnect_delay_sec": 0, 00:13:46.972 "fast_io_fail_timeout_sec": 0, 00:13:46.972 "disable_auto_failback": false, 00:13:46.972 "generate_uuids": false, 00:13:46.972 "transport_tos": 0, 00:13:46.972 "nvme_error_stat": false, 00:13:46.972 "rdma_srq_size": 0, 00:13:46.972 "io_path_stat": false, 00:13:46.972 "allow_accel_sequence": false, 00:13:46.972 "rdma_max_cq_size": 0, 00:13:46.972 "rdma_cm_event_timeout_ms": 0, 00:13:46.972 "dhchap_digests": [ 00:13:46.972 "sha256", 00:13:46.972 "sha384", 00:13:46.972 "sha512" 00:13:46.972 ], 00:13:46.972 "dhchap_dhgroups": [ 00:13:46.972 "null", 00:13:46.972 "ffdhe2048", 00:13:46.972 "ffdhe3072", 00:13:46.972 "ffdhe4096", 00:13:46.972 "ffdhe6144", 00:13:46.972 "ffdhe8192" 00:13:46.972 ] 00:13:46.972 } 00:13:46.972 }, 00:13:46.972 { 00:13:46.972 "method": "bdev_nvme_attach_controller", 00:13:46.972 "params": { 00:13:46.972 "name": "TLSTEST", 00:13:46.972 "trtype": "TCP", 00:13:46.972 "adrfam": "IPv4", 00:13:46.972 "traddr": "10.0.0.2", 00:13:46.972 "trsvcid": "4420", 00:13:46.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.973 "prchk_reftag": false, 00:13:46.973 "prchk_guard": false, 00:13:46.973 "ctrlr_loss_timeout_sec": 0, 00:13:46.973 "reconnect_delay_sec": 0, 00:13:46.973 "fast_io_fail_timeout_sec": 0, 00:13:46.973 "psk": "/tmp/tmp.PG0CDOhMqD", 00:13:46.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.973 "hdgst": false, 00:13:46.973 "ddgst": false 00:13:46.973 } 00:13:46.973 }, 00:13:46.973 { 00:13:46.973 "method": "bdev_nvme_set_hotplug", 00:13:46.973 "params": { 00:13:46.973 "period_us": 100000, 00:13:46.973 "enable": false 00:13:46.973 } 00:13:46.973 }, 00:13:46.973 { 00:13:46.973 "method": "bdev_wait_for_examine" 00:13:46.973 } 00:13:46.973 ] 00:13:46.973 }, 00:13:46.973 { 00:13:46.973 "subsystem": "nbd", 00:13:46.973 "config": [] 00:13:46.973 } 00:13:46.973 ] 00:13:46.973 }' 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73696 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 73696 ']' 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73696 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73696 00:13:46.973 killing process with pid 73696 00:13:46.973 Received shutdown signal, test time was about 10.000000 seconds 00:13:46.973 00:13:46.973 Latency(us) 00:13:46.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.973 =================================================================================================================== 00:13:46.973 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73696' 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73696 00:13:46.973 [2024-07-12 11:37:50.180291] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73696 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73640 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73640 ']' 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73640 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:46.973 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73640 00:13:47.232 killing process with pid 73640 00:13:47.232 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:47.232 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:47.232 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73640' 00:13:47.232 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73640 00:13:47.232 [2024-07-12 11:37:50.438069] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:47.232 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73640 00:13:47.491 11:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:47.491 11:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:47.491 "subsystems": [ 00:13:47.491 { 00:13:47.491 "subsystem": "keyring", 00:13:47.491 "config": [] 00:13:47.491 }, 00:13:47.491 { 00:13:47.491 "subsystem": "iobuf", 00:13:47.491 "config": [ 00:13:47.491 { 00:13:47.491 "method": "iobuf_set_options", 00:13:47.491 "params": { 00:13:47.491 "small_pool_count": 8192, 00:13:47.491 "large_pool_count": 1024, 00:13:47.491 "small_bufsize": 8192, 00:13:47.491 "large_bufsize": 135168 00:13:47.491 } 00:13:47.491 } 00:13:47.491 ] 00:13:47.491 }, 00:13:47.491 { 00:13:47.491 "subsystem": 
"sock", 00:13:47.491 "config": [ 00:13:47.491 { 00:13:47.491 "method": "sock_set_default_impl", 00:13:47.491 "params": { 00:13:47.491 "impl_name": "uring" 00:13:47.491 } 00:13:47.491 }, 00:13:47.491 { 00:13:47.491 "method": "sock_impl_set_options", 00:13:47.491 "params": { 00:13:47.491 "impl_name": "ssl", 00:13:47.491 "recv_buf_size": 4096, 00:13:47.491 "send_buf_size": 4096, 00:13:47.491 "enable_recv_pipe": true, 00:13:47.491 "enable_quickack": false, 00:13:47.491 "enable_placement_id": 0, 00:13:47.491 "enable_zerocopy_send_server": true, 00:13:47.491 "enable_zerocopy_send_client": false, 00:13:47.491 "zerocopy_threshold": 0, 00:13:47.491 "tls_version": 0, 00:13:47.491 "enable_ktls": false 00:13:47.491 } 00:13:47.491 }, 00:13:47.491 { 00:13:47.491 "method": "sock_impl_set_options", 00:13:47.492 "params": { 00:13:47.492 "impl_name": "posix", 00:13:47.492 "recv_buf_size": 2097152, 00:13:47.492 "send_buf_size": 2097152, 00:13:47.492 "enable_recv_pipe": true, 00:13:47.492 "enable_quickack": false, 00:13:47.492 "enable_placement_id": 0, 00:13:47.492 "enable_zerocopy_send_server": true, 00:13:47.492 "enable_zerocopy_send_client": false, 00:13:47.492 "zerocopy_threshold": 0, 00:13:47.492 "tls_version": 0, 00:13:47.492 "enable_ktls": false 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "sock_impl_set_options", 00:13:47.492 "params": { 00:13:47.492 "impl_name": "uring", 00:13:47.492 "recv_buf_size": 2097152, 00:13:47.492 "send_buf_size": 2097152, 00:13:47.492 "enable_recv_pipe": true, 00:13:47.492 "enable_quickack": false, 00:13:47.492 "enable_placement_id": 0, 00:13:47.492 "enable_zerocopy_send_server": false, 00:13:47.492 "enable_zerocopy_send_client": false, 00:13:47.492 "zerocopy_threshold": 0, 00:13:47.492 "tls_version": 0, 00:13:47.492 "enable_ktls": false 00:13:47.492 } 00:13:47.492 } 00:13:47.492 ] 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "subsystem": "vmd", 00:13:47.492 "config": [] 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "subsystem": "accel", 00:13:47.492 "config": [ 00:13:47.492 { 00:13:47.492 "method": "accel_set_options", 00:13:47.492 "params": { 00:13:47.492 "small_cache_size": 128, 00:13:47.492 "large_cache_size": 16, 00:13:47.492 "task_count": 2048, 00:13:47.492 "sequence_count": 2048, 00:13:47.492 "buf_count": 2048 00:13:47.492 } 00:13:47.492 } 00:13:47.492 ] 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "subsystem": "bdev", 00:13:47.492 "config": [ 00:13:47.492 { 00:13:47.492 "method": "bdev_set_options", 00:13:47.492 "params": { 00:13:47.492 "bdev_io_pool_size": 65535, 00:13:47.492 "bdev_io_cache_size": 256, 00:13:47.492 "bdev_auto_examine": true, 00:13:47.492 "iobuf_small_cache_size": 128, 00:13:47.492 "iobuf_large_cache_size": 16 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "bdev_raid_set_options", 00:13:47.492 "params": { 00:13:47.492 "process_window_size_kb": 1024 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "bdev_iscsi_set_options", 00:13:47.492 "params": { 00:13:47.492 "timeout_sec": 30 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "bdev_nvme_set_options", 00:13:47.492 "params": { 00:13:47.492 "action_on_timeout": "none", 00:13:47.492 "timeout_us": 0, 00:13:47.492 "timeout_admin_us": 0, 00:13:47.492 "keep_alive_timeout_ms": 10000, 00:13:47.492 "arbitration_burst": 0, 00:13:47.492 "low_priority_weight": 0, 00:13:47.492 "medium_priority_weight": 0, 00:13:47.492 "high_priority_weight": 0, 00:13:47.492 "nvme_adminq_poll_period_us": 10000, 00:13:47.492 "nvme_ioq_poll_period_us": 0, 
00:13:47.492 "io_queue_requests": 0, 00:13:47.492 "delay_cmd_submit": true, 00:13:47.492 "transport_retry_count": 4, 00:13:47.492 "bdev_retry_count": 3, 00:13:47.492 "transport_ack_timeout": 0, 00:13:47.492 "ctrlr_loss_timeout_sec": 0, 00:13:47.492 "reconnect_delay_sec": 0, 00:13:47.492 "fast_io_fail_timeout_sec": 0, 00:13:47.492 "disable_auto_failback": false, 00:13:47.492 "generate_uuids": false, 00:13:47.492 "transport_tos": 0, 00:13:47.492 "nvme_error_stat": false, 00:13:47.492 "rdma_srq_size": 0, 00:13:47.492 "io_path_stat": false, 00:13:47.492 "allow_accel_sequence": false, 00:13:47.492 "rdma_max_cq_size": 0, 00:13:47.492 "rdma_cm_event_timeout_ms": 0, 00:13:47.492 "dhchap_digests": [ 00:13:47.492 "sha256", 00:13:47.492 "sha384", 00:13:47.492 "sha512" 00:13:47.492 ], 00:13:47.492 "dhchap_dhgroups": [ 00:13:47.492 "null", 00:13:47.492 "ffdhe2048", 00:13:47.492 "ffdhe3072", 00:13:47.492 "ffdhe4096", 00:13:47.492 "ffdhe6144", 00:13:47.492 "ffdhe8192" 00:13:47.492 ] 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "bdev_nvme_set_hotplug", 00:13:47.492 "params": { 00:13:47.492 "period_us": 100000, 00:13:47.492 "enable": false 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "bdev_malloc_create", 00:13:47.492 "params": { 00:13:47.492 "name": "malloc0", 00:13:47.492 "num_blocks": 8192, 00:13:47.492 "block_size": 4096, 00:13:47.492 "physical_block_size": 4096, 00:13:47.492 "uuid": "9b585145-04d3-4ed2-96fc-21324d282db1", 00:13:47.492 "optimal_io_boundary": 0 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "bdev_wait_for_examine" 00:13:47.492 } 00:13:47.492 ] 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "subsystem": "nbd", 00:13:47.492 "config": [] 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "subsystem": "scheduler", 00:13:47.492 "config": [ 00:13:47.492 { 00:13:47.492 "method": "framework_set_scheduler", 00:13:47.492 "params": { 00:13:47.492 "name": "static" 00:13:47.492 } 00:13:47.492 } 00:13:47.492 ] 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "subsystem": "nvmf", 00:13:47.492 "config": [ 00:13:47.492 { 00:13:47.492 "method": "nvmf_set_config", 00:13:47.492 "params": { 00:13:47.492 "discovery_filter": "match_any", 00:13:47.492 "admin_cmd_passthru": { 00:13:47.492 "identify_ctrlr": false 00:13:47.492 } 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "nvmf_set_max_subsystems", 00:13:47.492 "params": { 00:13:47.492 "max_subsystems": 1024 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "nvmf_set_crdt", 00:13:47.492 "params": { 00:13:47.492 "crdt1": 0, 00:13:47.492 "crdt2": 0, 00:13:47.492 "crdt3": 0 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.492 "method": "nvmf_create_transport", 00:13:47.492 "params": { 00:13:47.492 "trtype": "TCP", 00:13:47.492 "max_queue_depth": 128, 00:13:47.492 "max_io_qpairs_per_ctrlr": 127, 00:13:47.492 "in_capsule_data_size": 4096, 00:13:47.492 "max_io_size": 131072, 00:13:47.492 "io_unit_size": 131072, 00:13:47.492 "max_aq_depth": 128, 00:13:47.492 "num_shared_buffers": 511, 00:13:47.492 "buf_cache_size": 4294967295, 00:13:47.492 "dif_insert_or_strip": false, 00:13:47.492 "zcopy": false, 00:13:47.492 "c2h_success": false, 00:13:47.492 "sock_priority": 0, 00:13:47.492 "abort_timeout_sec": 1, 00:13:47.492 "ack_timeout": 0, 00:13:47.492 "data_wr_pool_size": 0 00:13:47.492 } 00:13:47.492 }, 00:13:47.492 { 00:13:47.493 "method": "nvmf_create_subsystem", 00:13:47.493 "params": { 00:13:47.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.493 "allow_any_host": 
false, 00:13:47.493 "serial_number": "SPDK00000000000001", 00:13:47.493 "model_number": "SPDK bdev Controller", 00:13:47.493 "max_namespaces": 10, 00:13:47.493 "min_cntlid": 1, 00:13:47.493 "max_cntlid": 65519, 00:13:47.493 "ana_reporting": false 00:13:47.493 } 00:13:47.493 }, 00:13:47.493 { 00:13:47.493 "method": "nvmf_subsystem_add_host", 00:13:47.493 "params": { 00:13:47.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.493 "host": "nqn.2016-06.io.spdk:host1", 00:13:47.493 "psk": "/tmp/tmp.PG0CDOhMqD" 00:13:47.493 } 00:13:47.493 }, 00:13:47.493 { 00:13:47.493 "method": "nvmf_subsystem_add_ns", 00:13:47.493 "params": { 00:13:47.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.493 "namespace": { 00:13:47.493 "nsid": 1, 00:13:47.493 "bdev_name": "malloc0", 00:13:47.493 "nguid": "9B58514504D34ED296FC21324D282DB1", 00:13:47.493 "uuid": "9b585145-04d3-4ed2-96fc-21324d282db1", 00:13:47.493 "no_auto_visible": false 00:13:47.493 } 00:13:47.493 } 00:13:47.493 }, 00:13:47.493 { 00:13:47.493 "method": "nvmf_subsystem_add_listener", 00:13:47.493 "params": { 00:13:47.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.493 "listen_address": { 00:13:47.493 "trtype": "TCP", 00:13:47.493 "adrfam": "IPv4", 00:13:47.493 "traddr": "10.0.0.2", 00:13:47.493 "trsvcid": "4420" 00:13:47.493 }, 00:13:47.493 "secure_channel": true 00:13:47.493 } 00:13:47.493 } 00:13:47.493 ] 00:13:47.493 } 00:13:47.493 ] 00:13:47.493 }' 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73745 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73745 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73745 ']' 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.493 11:37:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.493 [2024-07-12 11:37:50.756517] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:13:47.493 [2024-07-12 11:37:50.756672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.493 [2024-07-12 11:37:50.901230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.752 [2024-07-12 11:37:51.015340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:47.752 [2024-07-12 11:37:51.015418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.752 [2024-07-12 11:37:51.015431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.752 [2024-07-12 11:37:51.015440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.752 [2024-07-12 11:37:51.015448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.752 [2024-07-12 11:37:51.015548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.752 [2024-07-12 11:37:51.181378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.010 [2024-07-12 11:37:51.251270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.010 [2024-07-12 11:37:51.267201] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:48.010 [2024-07-12 11:37:51.283195] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:48.010 [2024-07-12 11:37:51.283447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.267 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.267 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73777 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73777 /var/tmp/bdevperf.sock 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73777 ']' 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:48.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
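For reference, the config-over-file-descriptor pattern exercised in this stage (target/tls.sh@203 through @211) reduces roughly to the shell sketch below. The tgtconf and bdevperfconf variables, the RPC socket path, and the flags are the ones visible in this run; the relative paths, backgrounding, and process substitution are approximations of what the helper functions actually do, so read it as an outline rather than the script itself.

    # Relaunch the target from the JSON captured earlier with save_config;
    # the substituted pipe surfaces as a /dev/fd/NN path (here /dev/fd/62).
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &

    # Start bdevperf the same way with its own config (here /dev/fd/63);
    # -z makes it idle until a perform_tests RPC arrives on /var/tmp/bdevperf.sock.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &

    # Drive the 10-second verify workload once the bdevperf RPC socket is up.
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests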
00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.268 11:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:48.268 "subsystems": [ 00:13:48.268 { 00:13:48.268 "subsystem": "keyring", 00:13:48.268 "config": [] 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "subsystem": "iobuf", 00:13:48.268 "config": [ 00:13:48.268 { 00:13:48.268 "method": "iobuf_set_options", 00:13:48.268 "params": { 00:13:48.268 "small_pool_count": 8192, 00:13:48.268 "large_pool_count": 1024, 00:13:48.268 "small_bufsize": 8192, 00:13:48.268 "large_bufsize": 135168 00:13:48.268 } 00:13:48.268 } 00:13:48.268 ] 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "subsystem": "sock", 00:13:48.268 "config": [ 00:13:48.268 { 00:13:48.268 "method": "sock_set_default_impl", 00:13:48.268 "params": { 00:13:48.268 "impl_name": "uring" 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "method": "sock_impl_set_options", 00:13:48.268 "params": { 00:13:48.268 "impl_name": "ssl", 00:13:48.268 "recv_buf_size": 4096, 00:13:48.268 "send_buf_size": 4096, 00:13:48.268 "enable_recv_pipe": true, 00:13:48.268 "enable_quickack": false, 00:13:48.268 "enable_placement_id": 0, 00:13:48.268 "enable_zerocopy_send_server": true, 00:13:48.268 "enable_zerocopy_send_client": false, 00:13:48.268 "zerocopy_threshold": 0, 00:13:48.268 "tls_version": 0, 00:13:48.268 "enable_ktls": false 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "method": "sock_impl_set_options", 00:13:48.268 "params": { 00:13:48.268 "impl_name": "posix", 00:13:48.268 "recv_buf_size": 2097152, 00:13:48.268 "send_buf_size": 2097152, 00:13:48.268 "enable_recv_pipe": true, 00:13:48.268 "enable_quickack": false, 00:13:48.268 "enable_placement_id": 0, 00:13:48.268 "enable_zerocopy_send_server": true, 00:13:48.268 "enable_zerocopy_send_client": false, 00:13:48.268 "zerocopy_threshold": 0, 00:13:48.268 "tls_version": 0, 00:13:48.268 "enable_ktls": false 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "method": "sock_impl_set_options", 00:13:48.268 "params": { 00:13:48.268 "impl_name": "uring", 00:13:48.268 "recv_buf_size": 2097152, 00:13:48.268 "send_buf_size": 2097152, 00:13:48.268 "enable_recv_pipe": true, 00:13:48.268 "enable_quickack": false, 00:13:48.268 "enable_placement_id": 0, 00:13:48.268 "enable_zerocopy_send_server": false, 00:13:48.268 "enable_zerocopy_send_client": false, 00:13:48.268 "zerocopy_threshold": 0, 00:13:48.268 "tls_version": 0, 00:13:48.268 "enable_ktls": false 00:13:48.268 } 00:13:48.268 } 00:13:48.268 ] 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "subsystem": "vmd", 00:13:48.268 "config": [] 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "subsystem": "accel", 00:13:48.268 "config": [ 00:13:48.268 { 00:13:48.268 "method": "accel_set_options", 00:13:48.268 "params": { 00:13:48.268 "small_cache_size": 128, 00:13:48.268 "large_cache_size": 16, 00:13:48.268 "task_count": 2048, 00:13:48.268 "sequence_count": 2048, 00:13:48.268 "buf_count": 2048 00:13:48.268 } 00:13:48.268 } 00:13:48.268 ] 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "subsystem": "bdev", 00:13:48.268 "config": [ 00:13:48.268 { 00:13:48.268 "method": "bdev_set_options", 00:13:48.268 "params": { 00:13:48.268 "bdev_io_pool_size": 65535, 00:13:48.268 "bdev_io_cache_size": 256, 00:13:48.268 "bdev_auto_examine": true, 00:13:48.268 "iobuf_small_cache_size": 128, 00:13:48.268 "iobuf_large_cache_size": 16 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 
00:13:48.268 "method": "bdev_raid_set_options", 00:13:48.268 "params": { 00:13:48.268 "process_window_size_kb": 1024 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "method": "bdev_iscsi_set_options", 00:13:48.268 "params": { 00:13:48.268 "timeout_sec": 30 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "method": "bdev_nvme_set_options", 00:13:48.268 "params": { 00:13:48.268 "action_on_timeout": "none", 00:13:48.268 "timeout_us": 0, 00:13:48.268 "timeout_admin_us": 0, 00:13:48.268 "keep_alive_timeout_ms": 10000, 00:13:48.268 "arbitration_burst": 0, 00:13:48.268 "low_priority_weight": 0, 00:13:48.268 "medium_priority_weight": 0, 00:13:48.268 "high_priority_weight": 0, 00:13:48.268 "nvme_adminq_poll_period_us": 10000, 00:13:48.268 "nvme_ioq_poll_period_us": 0, 00:13:48.268 "io_queue_requests": 512, 00:13:48.268 "delay_cmd_submit": true, 00:13:48.268 "transport_retry_count": 4, 00:13:48.268 "bdev_retry_count": 3, 00:13:48.268 "transport_ack_timeout": 0, 00:13:48.268 "ctrlr_loss_timeout_sec": 0, 00:13:48.268 "reconnect_delay_sec": 0, 00:13:48.268 "fast_io_fail_timeout_sec": 0, 00:13:48.268 "disable_auto_failback": false, 00:13:48.268 "generate_uuids": false, 00:13:48.268 "transport_tos": 0, 00:13:48.268 "nvme_error_stat": false, 00:13:48.268 "rdma_srq_size": 0, 00:13:48.268 "io_path_stat": false, 00:13:48.268 "allow_accel_sequence": false, 00:13:48.268 "rdma_max_cq_size": 0, 00:13:48.268 "rdma_cm_event_timeout_ms": 0, 00:13:48.268 "dhchap_digests": [ 00:13:48.268 "sha256", 00:13:48.268 "sha384", 00:13:48.268 "sha512" 00:13:48.268 ], 00:13:48.268 "dhchap_dhgroups": [ 00:13:48.268 "null", 00:13:48.268 "ffdhe2048", 00:13:48.268 "ffdhe3072", 00:13:48.268 "ffdhe4096", 00:13:48.268 "ffdhe6144", 00:13:48.268 "ffdhe8192" 00:13:48.268 ] 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "method": "bdev_nvme_attach_controller", 00:13:48.268 "params": { 00:13:48.268 "name": "TLSTEST", 00:13:48.268 "trtype": "TCP", 00:13:48.268 "adrfam": "IPv4", 00:13:48.268 "traddr": "10.0.0.2", 00:13:48.268 "trsvcid": "4420", 00:13:48.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.268 "prchk_reftag": false, 00:13:48.268 "prchk_guard": false, 00:13:48.268 "ctrlr_loss_timeout_sec": 0, 00:13:48.268 "reconnect_delay_sec": 0, 00:13:48.268 "fast_io_fail_timeout_sec": 0, 00:13:48.268 "psk": "/tmp/tmp.PG0CDOhMqD", 00:13:48.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:48.268 "hdgst": false, 00:13:48.268 "ddgst": false 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "method": "bdev_nvme_set_hotplug", 00:13:48.268 "params": { 00:13:48.268 "period_us": 100000, 00:13:48.268 "enable": false 00:13:48.268 } 00:13:48.268 }, 00:13:48.268 { 00:13:48.268 "method": "bdev_wait_for_examine" 00:13:48.268 } 00:13:48.268 ] 00:13:48.269 }, 00:13:48.269 { 00:13:48.269 "subsystem": "nbd", 00:13:48.269 "config": [] 00:13:48.269 } 00:13:48.269 ] 00:13:48.269 }' 00:13:48.526 [2024-07-12 11:37:51.754493] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:13:48.526 [2024-07-12 11:37:51.754611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73777 ] 00:13:48.526 [2024-07-12 11:37:51.893344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.785 [2024-07-12 11:37:52.011118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.785 [2024-07-12 11:37:52.147475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.785 [2024-07-12 11:37:52.189018] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:48.786 [2024-07-12 11:37:52.189141] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:49.721 11:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.721 11:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:49.721 11:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:49.721 Running I/O for 10 seconds... 00:13:59.690 00:13:59.690 Latency(us) 00:13:59.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.690 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:59.690 Verification LBA range: start 0x0 length 0x2000 00:13:59.690 TLSTESTn1 : 10.02 3986.81 15.57 0.00 0.00 32041.32 7208.96 35508.60 00:13:59.690 =================================================================================================================== 00:13:59.690 Total : 3986.81 15.57 0.00 0.00 32041.32 7208.96 35508.60 00:13:59.690 0 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73777 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73777 ']' 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73777 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73777 00:13:59.690 killing process with pid 73777 00:13:59.690 Received shutdown signal, test time was about 10.000000 seconds 00:13:59.690 00:13:59.690 Latency(us) 00:13:59.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.690 =================================================================================================================== 00:13:59.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73777' 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73777 00:13:59.690 [2024-07-12 11:38:02.991592] app.c:1023:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:59.690 11:38:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73777 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73745 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73745 ']' 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73745 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73745 00:13:59.947 killing process with pid 73745 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73745' 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73745 00:13:59.947 [2024-07-12 11:38:03.253130] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:59.947 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73745 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73910 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73910 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73910 ']' 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.206 11:38:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.206 [2024-07-12 11:38:03.554181] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:00.206 [2024-07-12 11:38:03.554290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.464 [2024-07-12 11:38:03.694401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.464 [2024-07-12 11:38:03.808365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:00.464 [2024-07-12 11:38:03.808441] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.464 [2024-07-12 11:38:03.808460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.464 [2024-07-12 11:38:03.808474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.464 [2024-07-12 11:38:03.808487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.464 [2024-07-12 11:38:03.808523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.464 [2024-07-12 11:38:03.862164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.032 11:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.032 11:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:01.032 11:38:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.032 11:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.032 11:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.291 11:38:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.291 11:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.PG0CDOhMqD 00:14:01.291 11:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PG0CDOhMqD 00:14:01.291 11:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:01.291 [2024-07-12 11:38:04.724880] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.550 11:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:01.809 11:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:01.809 [2024-07-12 11:38:05.249025] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:01.809 [2024-07-12 11:38:05.249322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.067 11:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:02.326 malloc0 00:14:02.326 11:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:02.584 11:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PG0CDOhMqD 00:14:02.843 [2024-07-12 11:38:06.076981] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73969 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
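For reference, the setup_nvmf_tgt helper invoked above (target/tls.sh@219) boils down to the RPC sequence below. The addresses, NQNs, sizes, and the temporary PSK file are the ones from this run; the rpc and key shell variables are shorthand added here, and the sketch is a condensed outline rather than a verbatim copy of the helper.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.PG0CDOhMqD   # pre-shared key file created earlier in the test

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k requests a secure (TLS) channel; it shows up as "secure_channel": true in save_config
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # tie host1 to the PSK so its TLS handshake can complete
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key

The host side, shown next in the log, registers the same key file on the bdevperf RPC socket with keyring_file_add_key key0 and then passes --psk key0 to bdev_nvme_attach_controller.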
00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73969 /var/tmp/bdevperf.sock 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73969 ']' 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.843 11:38:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.843 [2024-07-12 11:38:06.152695] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:02.843 [2024-07-12 11:38:06.152978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73969 ] 00:14:03.102 [2024-07-12 11:38:06.295920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.102 [2024-07-12 11:38:06.395130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.102 [2024-07-12 11:38:06.450065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:03.686 11:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.686 11:38:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:03.686 11:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PG0CDOhMqD 00:14:03.944 11:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:04.203 [2024-07-12 11:38:07.583273] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:04.462 nvme0n1 00:14:04.462 11:38:07 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:04.462 Running I/O for 1 seconds... 
00:14:05.397 00:14:05.397 Latency(us) 00:14:05.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.397 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:05.397 Verification LBA range: start 0x0 length 0x2000 00:14:05.397 nvme0n1 : 1.03 3975.64 15.53 0.00 0.00 31842.71 9532.51 21805.61 00:14:05.397 =================================================================================================================== 00:14:05.397 Total : 3975.64 15.53 0.00 0.00 31842.71 9532.51 21805.61 00:14:05.397 0 00:14:05.397 11:38:08 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73969 00:14:05.397 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73969 ']' 00:14:05.397 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73969 00:14:05.397 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:05.397 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.397 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73969 00:14:05.656 killing process with pid 73969 00:14:05.656 Received shutdown signal, test time was about 1.000000 seconds 00:14:05.656 00:14:05.656 Latency(us) 00:14:05.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.656 =================================================================================================================== 00:14:05.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:05.656 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:05.656 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:05.656 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73969' 00:14:05.656 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73969 00:14:05.656 11:38:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73969 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73910 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73910 ']' 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73910 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73910 00:14:05.656 killing process with pid 73910 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73910' 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73910 00:14:05.656 [2024-07-12 11:38:09.100250] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:05.656 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73910 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74017 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74017 00:14:05.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74017 ']' 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.914 11:38:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.172 [2024-07-12 11:38:09.391804] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:06.172 [2024-07-12 11:38:09.391888] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.172 [2024-07-12 11:38:09.528719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.430 [2024-07-12 11:38:09.637413] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.430 [2024-07-12 11:38:09.637470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.430 [2024-07-12 11:38:09.637499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.430 [2024-07-12 11:38:09.637521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.430 [2024-07-12 11:38:09.637528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:06.430 [2024-07-12 11:38:09.637558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.430 [2024-07-12 11:38:09.692094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.995 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.995 [2024-07-12 11:38:10.383169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.995 malloc0 00:14:06.995 [2024-07-12 11:38:10.414096] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:06.995 [2024-07-12 11:38:10.414311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74049 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74049 /var/tmp/bdevperf.sock 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74049 ']' 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.254 11:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.254 [2024-07-12 11:38:10.507134] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:14:07.254 [2024-07-12 11:38:10.507502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74049 ] 00:14:07.254 [2024-07-12 11:38:10.647805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.511 [2024-07-12 11:38:10.753683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.511 [2024-07-12 11:38:10.806886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:08.077 11:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.077 11:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:08.077 11:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PG0CDOhMqD 00:14:08.336 11:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:08.901 [2024-07-12 11:38:12.043207] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.901 nvme0n1 00:14:08.901 11:38:12 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:08.901 Running I/O for 1 seconds... 00:14:09.832 00:14:09.832 Latency(us) 00:14:09.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.832 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:09.832 Verification LBA range: start 0x0 length 0x2000 00:14:09.832 nvme0n1 : 1.02 3904.96 15.25 0.00 0.00 32366.53 1266.04 22758.87 00:14:09.832 =================================================================================================================== 00:14:09.832 Total : 3904.96 15.25 0.00 0.00 32366.53 1266.04 22758.87 00:14:09.832 0 00:14:09.832 11:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:09.832 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.832 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.131 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.131 11:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:10.131 "subsystems": [ 00:14:10.131 { 00:14:10.131 "subsystem": "keyring", 00:14:10.131 "config": [ 00:14:10.131 { 00:14:10.131 "method": "keyring_file_add_key", 00:14:10.131 "params": { 00:14:10.131 "name": "key0", 00:14:10.131 "path": "/tmp/tmp.PG0CDOhMqD" 00:14:10.131 } 00:14:10.131 } 00:14:10.131 ] 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "subsystem": "iobuf", 00:14:10.131 "config": [ 00:14:10.131 { 00:14:10.131 "method": "iobuf_set_options", 00:14:10.131 "params": { 00:14:10.131 "small_pool_count": 8192, 00:14:10.131 "large_pool_count": 1024, 00:14:10.131 "small_bufsize": 8192, 00:14:10.131 "large_bufsize": 135168 00:14:10.131 } 00:14:10.131 } 00:14:10.131 ] 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "subsystem": "sock", 00:14:10.131 "config": [ 00:14:10.131 { 00:14:10.131 "method": "sock_set_default_impl", 00:14:10.131 "params": { 00:14:10.131 "impl_name": "uring" 
00:14:10.131 } 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "method": "sock_impl_set_options", 00:14:10.131 "params": { 00:14:10.131 "impl_name": "ssl", 00:14:10.131 "recv_buf_size": 4096, 00:14:10.131 "send_buf_size": 4096, 00:14:10.131 "enable_recv_pipe": true, 00:14:10.131 "enable_quickack": false, 00:14:10.131 "enable_placement_id": 0, 00:14:10.131 "enable_zerocopy_send_server": true, 00:14:10.131 "enable_zerocopy_send_client": false, 00:14:10.131 "zerocopy_threshold": 0, 00:14:10.131 "tls_version": 0, 00:14:10.131 "enable_ktls": false 00:14:10.131 } 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "method": "sock_impl_set_options", 00:14:10.131 "params": { 00:14:10.131 "impl_name": "posix", 00:14:10.131 "recv_buf_size": 2097152, 00:14:10.131 "send_buf_size": 2097152, 00:14:10.131 "enable_recv_pipe": true, 00:14:10.131 "enable_quickack": false, 00:14:10.131 "enable_placement_id": 0, 00:14:10.131 "enable_zerocopy_send_server": true, 00:14:10.131 "enable_zerocopy_send_client": false, 00:14:10.131 "zerocopy_threshold": 0, 00:14:10.131 "tls_version": 0, 00:14:10.131 "enable_ktls": false 00:14:10.131 } 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "method": "sock_impl_set_options", 00:14:10.131 "params": { 00:14:10.131 "impl_name": "uring", 00:14:10.131 "recv_buf_size": 2097152, 00:14:10.131 "send_buf_size": 2097152, 00:14:10.131 "enable_recv_pipe": true, 00:14:10.131 "enable_quickack": false, 00:14:10.131 "enable_placement_id": 0, 00:14:10.131 "enable_zerocopy_send_server": false, 00:14:10.131 "enable_zerocopy_send_client": false, 00:14:10.131 "zerocopy_threshold": 0, 00:14:10.131 "tls_version": 0, 00:14:10.131 "enable_ktls": false 00:14:10.131 } 00:14:10.131 } 00:14:10.131 ] 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "subsystem": "vmd", 00:14:10.131 "config": [] 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "subsystem": "accel", 00:14:10.131 "config": [ 00:14:10.131 { 00:14:10.131 "method": "accel_set_options", 00:14:10.131 "params": { 00:14:10.131 "small_cache_size": 128, 00:14:10.131 "large_cache_size": 16, 00:14:10.131 "task_count": 2048, 00:14:10.131 "sequence_count": 2048, 00:14:10.131 "buf_count": 2048 00:14:10.131 } 00:14:10.131 } 00:14:10.131 ] 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "subsystem": "bdev", 00:14:10.131 "config": [ 00:14:10.131 { 00:14:10.131 "method": "bdev_set_options", 00:14:10.131 "params": { 00:14:10.131 "bdev_io_pool_size": 65535, 00:14:10.131 "bdev_io_cache_size": 256, 00:14:10.131 "bdev_auto_examine": true, 00:14:10.131 "iobuf_small_cache_size": 128, 00:14:10.131 "iobuf_large_cache_size": 16 00:14:10.131 } 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "method": "bdev_raid_set_options", 00:14:10.131 "params": { 00:14:10.131 "process_window_size_kb": 1024 00:14:10.131 } 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "method": "bdev_iscsi_set_options", 00:14:10.131 "params": { 00:14:10.131 "timeout_sec": 30 00:14:10.131 } 00:14:10.131 }, 00:14:10.131 { 00:14:10.131 "method": "bdev_nvme_set_options", 00:14:10.131 "params": { 00:14:10.131 "action_on_timeout": "none", 00:14:10.131 "timeout_us": 0, 00:14:10.131 "timeout_admin_us": 0, 00:14:10.131 "keep_alive_timeout_ms": 10000, 00:14:10.131 "arbitration_burst": 0, 00:14:10.131 "low_priority_weight": 0, 00:14:10.131 "medium_priority_weight": 0, 00:14:10.131 "high_priority_weight": 0, 00:14:10.131 "nvme_adminq_poll_period_us": 10000, 00:14:10.131 "nvme_ioq_poll_period_us": 0, 00:14:10.131 "io_queue_requests": 0, 00:14:10.131 "delay_cmd_submit": true, 00:14:10.131 "transport_retry_count": 4, 00:14:10.131 "bdev_retry_count": 3, 
00:14:10.131 "transport_ack_timeout": 0, 00:14:10.131 "ctrlr_loss_timeout_sec": 0, 00:14:10.131 "reconnect_delay_sec": 0, 00:14:10.131 "fast_io_fail_timeout_sec": 0, 00:14:10.131 "disable_auto_failback": false, 00:14:10.131 "generate_uuids": false, 00:14:10.131 "transport_tos": 0, 00:14:10.131 "nvme_error_stat": false, 00:14:10.131 "rdma_srq_size": 0, 00:14:10.131 "io_path_stat": false, 00:14:10.131 "allow_accel_sequence": false, 00:14:10.131 "rdma_max_cq_size": 0, 00:14:10.132 "rdma_cm_event_timeout_ms": 0, 00:14:10.132 "dhchap_digests": [ 00:14:10.132 "sha256", 00:14:10.132 "sha384", 00:14:10.132 "sha512" 00:14:10.132 ], 00:14:10.132 "dhchap_dhgroups": [ 00:14:10.132 "null", 00:14:10.132 "ffdhe2048", 00:14:10.132 "ffdhe3072", 00:14:10.132 "ffdhe4096", 00:14:10.132 "ffdhe6144", 00:14:10.132 "ffdhe8192" 00:14:10.132 ] 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "bdev_nvme_set_hotplug", 00:14:10.132 "params": { 00:14:10.132 "period_us": 100000, 00:14:10.132 "enable": false 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "bdev_malloc_create", 00:14:10.132 "params": { 00:14:10.132 "name": "malloc0", 00:14:10.132 "num_blocks": 8192, 00:14:10.132 "block_size": 4096, 00:14:10.132 "physical_block_size": 4096, 00:14:10.132 "uuid": "21c9798c-4a91-4323-beab-dd687b6d98ff", 00:14:10.132 "optimal_io_boundary": 0 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "bdev_wait_for_examine" 00:14:10.132 } 00:14:10.132 ] 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "subsystem": "nbd", 00:14:10.132 "config": [] 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "subsystem": "scheduler", 00:14:10.132 "config": [ 00:14:10.132 { 00:14:10.132 "method": "framework_set_scheduler", 00:14:10.132 "params": { 00:14:10.132 "name": "static" 00:14:10.132 } 00:14:10.132 } 00:14:10.132 ] 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "subsystem": "nvmf", 00:14:10.132 "config": [ 00:14:10.132 { 00:14:10.132 "method": "nvmf_set_config", 00:14:10.132 "params": { 00:14:10.132 "discovery_filter": "match_any", 00:14:10.132 "admin_cmd_passthru": { 00:14:10.132 "identify_ctrlr": false 00:14:10.132 } 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "nvmf_set_max_subsystems", 00:14:10.132 "params": { 00:14:10.132 "max_subsystems": 1024 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "nvmf_set_crdt", 00:14:10.132 "params": { 00:14:10.132 "crdt1": 0, 00:14:10.132 "crdt2": 0, 00:14:10.132 "crdt3": 0 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "nvmf_create_transport", 00:14:10.132 "params": { 00:14:10.132 "trtype": "TCP", 00:14:10.132 "max_queue_depth": 128, 00:14:10.132 "max_io_qpairs_per_ctrlr": 127, 00:14:10.132 "in_capsule_data_size": 4096, 00:14:10.132 "max_io_size": 131072, 00:14:10.132 "io_unit_size": 131072, 00:14:10.132 "max_aq_depth": 128, 00:14:10.132 "num_shared_buffers": 511, 00:14:10.132 "buf_cache_size": 4294967295, 00:14:10.132 "dif_insert_or_strip": false, 00:14:10.132 "zcopy": false, 00:14:10.132 "c2h_success": false, 00:14:10.132 "sock_priority": 0, 00:14:10.132 "abort_timeout_sec": 1, 00:14:10.132 "ack_timeout": 0, 00:14:10.132 "data_wr_pool_size": 0 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "nvmf_create_subsystem", 00:14:10.132 "params": { 00:14:10.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.132 "allow_any_host": false, 00:14:10.132 "serial_number": "00000000000000000000", 00:14:10.132 "model_number": "SPDK bdev Controller", 00:14:10.132 "max_namespaces": 32, 
00:14:10.132 "min_cntlid": 1, 00:14:10.132 "max_cntlid": 65519, 00:14:10.132 "ana_reporting": false 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "nvmf_subsystem_add_host", 00:14:10.132 "params": { 00:14:10.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.132 "host": "nqn.2016-06.io.spdk:host1", 00:14:10.132 "psk": "key0" 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "nvmf_subsystem_add_ns", 00:14:10.132 "params": { 00:14:10.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.132 "namespace": { 00:14:10.132 "nsid": 1, 00:14:10.132 "bdev_name": "malloc0", 00:14:10.132 "nguid": "21C9798C4A914323BEABDD687B6D98FF", 00:14:10.132 "uuid": "21c9798c-4a91-4323-beab-dd687b6d98ff", 00:14:10.132 "no_auto_visible": false 00:14:10.132 } 00:14:10.132 } 00:14:10.132 }, 00:14:10.132 { 00:14:10.132 "method": "nvmf_subsystem_add_listener", 00:14:10.132 "params": { 00:14:10.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.132 "listen_address": { 00:14:10.132 "trtype": "TCP", 00:14:10.132 "adrfam": "IPv4", 00:14:10.132 "traddr": "10.0.0.2", 00:14:10.132 "trsvcid": "4420" 00:14:10.132 }, 00:14:10.132 "secure_channel": true 00:14:10.132 } 00:14:10.132 } 00:14:10.132 ] 00:14:10.132 } 00:14:10.132 ] 00:14:10.132 }' 00:14:10.132 11:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:10.392 11:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:10.392 "subsystems": [ 00:14:10.392 { 00:14:10.392 "subsystem": "keyring", 00:14:10.392 "config": [ 00:14:10.392 { 00:14:10.392 "method": "keyring_file_add_key", 00:14:10.392 "params": { 00:14:10.392 "name": "key0", 00:14:10.392 "path": "/tmp/tmp.PG0CDOhMqD" 00:14:10.392 } 00:14:10.392 } 00:14:10.392 ] 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "subsystem": "iobuf", 00:14:10.392 "config": [ 00:14:10.392 { 00:14:10.392 "method": "iobuf_set_options", 00:14:10.392 "params": { 00:14:10.392 "small_pool_count": 8192, 00:14:10.392 "large_pool_count": 1024, 00:14:10.392 "small_bufsize": 8192, 00:14:10.392 "large_bufsize": 135168 00:14:10.392 } 00:14:10.392 } 00:14:10.392 ] 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "subsystem": "sock", 00:14:10.392 "config": [ 00:14:10.392 { 00:14:10.392 "method": "sock_set_default_impl", 00:14:10.392 "params": { 00:14:10.392 "impl_name": "uring" 00:14:10.392 } 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "method": "sock_impl_set_options", 00:14:10.392 "params": { 00:14:10.392 "impl_name": "ssl", 00:14:10.392 "recv_buf_size": 4096, 00:14:10.392 "send_buf_size": 4096, 00:14:10.392 "enable_recv_pipe": true, 00:14:10.392 "enable_quickack": false, 00:14:10.392 "enable_placement_id": 0, 00:14:10.392 "enable_zerocopy_send_server": true, 00:14:10.392 "enable_zerocopy_send_client": false, 00:14:10.392 "zerocopy_threshold": 0, 00:14:10.392 "tls_version": 0, 00:14:10.392 "enable_ktls": false 00:14:10.392 } 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "method": "sock_impl_set_options", 00:14:10.392 "params": { 00:14:10.392 "impl_name": "posix", 00:14:10.392 "recv_buf_size": 2097152, 00:14:10.392 "send_buf_size": 2097152, 00:14:10.392 "enable_recv_pipe": true, 00:14:10.392 "enable_quickack": false, 00:14:10.392 "enable_placement_id": 0, 00:14:10.392 "enable_zerocopy_send_server": true, 00:14:10.392 "enable_zerocopy_send_client": false, 00:14:10.392 "zerocopy_threshold": 0, 00:14:10.392 "tls_version": 0, 00:14:10.392 "enable_ktls": false 00:14:10.392 } 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "method": 
"sock_impl_set_options", 00:14:10.392 "params": { 00:14:10.392 "impl_name": "uring", 00:14:10.392 "recv_buf_size": 2097152, 00:14:10.392 "send_buf_size": 2097152, 00:14:10.392 "enable_recv_pipe": true, 00:14:10.392 "enable_quickack": false, 00:14:10.392 "enable_placement_id": 0, 00:14:10.392 "enable_zerocopy_send_server": false, 00:14:10.392 "enable_zerocopy_send_client": false, 00:14:10.392 "zerocopy_threshold": 0, 00:14:10.392 "tls_version": 0, 00:14:10.392 "enable_ktls": false 00:14:10.392 } 00:14:10.392 } 00:14:10.392 ] 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "subsystem": "vmd", 00:14:10.392 "config": [] 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "subsystem": "accel", 00:14:10.392 "config": [ 00:14:10.392 { 00:14:10.392 "method": "accel_set_options", 00:14:10.392 "params": { 00:14:10.392 "small_cache_size": 128, 00:14:10.392 "large_cache_size": 16, 00:14:10.392 "task_count": 2048, 00:14:10.392 "sequence_count": 2048, 00:14:10.392 "buf_count": 2048 00:14:10.392 } 00:14:10.392 } 00:14:10.392 ] 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "subsystem": "bdev", 00:14:10.392 "config": [ 00:14:10.392 { 00:14:10.392 "method": "bdev_set_options", 00:14:10.392 "params": { 00:14:10.392 "bdev_io_pool_size": 65535, 00:14:10.392 "bdev_io_cache_size": 256, 00:14:10.392 "bdev_auto_examine": true, 00:14:10.392 "iobuf_small_cache_size": 128, 00:14:10.392 "iobuf_large_cache_size": 16 00:14:10.392 } 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "method": "bdev_raid_set_options", 00:14:10.392 "params": { 00:14:10.392 "process_window_size_kb": 1024 00:14:10.392 } 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "method": "bdev_iscsi_set_options", 00:14:10.392 "params": { 00:14:10.392 "timeout_sec": 30 00:14:10.392 } 00:14:10.392 }, 00:14:10.392 { 00:14:10.392 "method": "bdev_nvme_set_options", 00:14:10.392 "params": { 00:14:10.392 "action_on_timeout": "none", 00:14:10.392 "timeout_us": 0, 00:14:10.392 "timeout_admin_us": 0, 00:14:10.392 "keep_alive_timeout_ms": 10000, 00:14:10.392 "arbitration_burst": 0, 00:14:10.392 "low_priority_weight": 0, 00:14:10.392 "medium_priority_weight": 0, 00:14:10.392 "high_priority_weight": 0, 00:14:10.392 "nvme_adminq_poll_period_us": 10000, 00:14:10.392 "nvme_ioq_poll_period_us": 0, 00:14:10.392 "io_queue_requests": 512, 00:14:10.392 "delay_cmd_submit": true, 00:14:10.392 "transport_retry_count": 4, 00:14:10.392 "bdev_retry_count": 3, 00:14:10.392 "transport_ack_timeout": 0, 00:14:10.392 "ctrlr_loss_timeout_sec": 0, 00:14:10.392 "reconnect_delay_sec": 0, 00:14:10.392 "fast_io_fail_timeout_sec": 0, 00:14:10.392 "disable_auto_failback": false, 00:14:10.392 "generate_uuids": false, 00:14:10.392 "transport_tos": 0, 00:14:10.392 "nvme_error_stat": false, 00:14:10.392 "rdma_srq_size": 0, 00:14:10.392 "io_path_stat": false, 00:14:10.392 "allow_accel_sequence": false, 00:14:10.392 "rdma_max_cq_size": 0, 00:14:10.392 "rdma_cm_event_timeout_ms": 0, 00:14:10.392 "dhchap_digests": [ 00:14:10.392 "sha256", 00:14:10.392 "sha384", 00:14:10.392 "sha512" 00:14:10.392 ], 00:14:10.392 "dhchap_dhgroups": [ 00:14:10.392 "null", 00:14:10.392 "ffdhe2048", 00:14:10.392 "ffdhe3072", 00:14:10.392 "ffdhe4096", 00:14:10.392 "ffdhe6144", 00:14:10.392 "ffdhe8192" 00:14:10.392 ] 00:14:10.393 } 00:14:10.393 }, 00:14:10.393 { 00:14:10.393 "method": "bdev_nvme_attach_controller", 00:14:10.393 "params": { 00:14:10.393 "name": "nvme0", 00:14:10.393 "trtype": "TCP", 00:14:10.393 "adrfam": "IPv4", 00:14:10.393 "traddr": "10.0.0.2", 00:14:10.393 "trsvcid": "4420", 00:14:10.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:10.393 "prchk_reftag": false, 00:14:10.393 "prchk_guard": false, 00:14:10.393 "ctrlr_loss_timeout_sec": 0, 00:14:10.393 "reconnect_delay_sec": 0, 00:14:10.393 "fast_io_fail_timeout_sec": 0, 00:14:10.393 "psk": "key0", 00:14:10.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.393 "hdgst": false, 00:14:10.393 "ddgst": false 00:14:10.393 } 00:14:10.393 }, 00:14:10.393 { 00:14:10.393 "method": "bdev_nvme_set_hotplug", 00:14:10.393 "params": { 00:14:10.393 "period_us": 100000, 00:14:10.393 "enable": false 00:14:10.393 } 00:14:10.393 }, 00:14:10.393 { 00:14:10.393 "method": "bdev_enable_histogram", 00:14:10.393 "params": { 00:14:10.393 "name": "nvme0n1", 00:14:10.393 "enable": true 00:14:10.393 } 00:14:10.393 }, 00:14:10.393 { 00:14:10.393 "method": "bdev_wait_for_examine" 00:14:10.393 } 00:14:10.393 ] 00:14:10.393 }, 00:14:10.393 { 00:14:10.393 "subsystem": "nbd", 00:14:10.393 "config": [] 00:14:10.393 } 00:14:10.393 ] 00:14:10.393 }' 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74049 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74049 ']' 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74049 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74049 00:14:10.393 killing process with pid 74049 00:14:10.393 Received shutdown signal, test time was about 1.000000 seconds 00:14:10.393 00:14:10.393 Latency(us) 00:14:10.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.393 =================================================================================================================== 00:14:10.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74049' 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74049 00:14:10.393 11:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74049 00:14:10.651 11:38:14 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74017 00:14:10.651 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74017 ']' 00:14:10.651 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74017 00:14:10.652 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:10.652 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.652 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74017 00:14:10.652 killing process with pid 74017 00:14:10.652 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:10.652 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:10.652 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74017' 00:14:10.652 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74017 00:14:10.652 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74017 
00:14:10.911 11:38:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:10.911 11:38:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.911 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:10.911 11:38:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:10.911 "subsystems": [ 00:14:10.911 { 00:14:10.911 "subsystem": "keyring", 00:14:10.911 "config": [ 00:14:10.911 { 00:14:10.911 "method": "keyring_file_add_key", 00:14:10.911 "params": { 00:14:10.911 "name": "key0", 00:14:10.911 "path": "/tmp/tmp.PG0CDOhMqD" 00:14:10.911 } 00:14:10.911 } 00:14:10.911 ] 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "subsystem": "iobuf", 00:14:10.911 "config": [ 00:14:10.911 { 00:14:10.911 "method": "iobuf_set_options", 00:14:10.911 "params": { 00:14:10.911 "small_pool_count": 8192, 00:14:10.911 "large_pool_count": 1024, 00:14:10.911 "small_bufsize": 8192, 00:14:10.911 "large_bufsize": 135168 00:14:10.911 } 00:14:10.911 } 00:14:10.911 ] 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "subsystem": "sock", 00:14:10.911 "config": [ 00:14:10.911 { 00:14:10.911 "method": "sock_set_default_impl", 00:14:10.911 "params": { 00:14:10.911 "impl_name": "uring" 00:14:10.911 } 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "method": "sock_impl_set_options", 00:14:10.911 "params": { 00:14:10.911 "impl_name": "ssl", 00:14:10.911 "recv_buf_size": 4096, 00:14:10.911 "send_buf_size": 4096, 00:14:10.911 "enable_recv_pipe": true, 00:14:10.911 "enable_quickack": false, 00:14:10.911 "enable_placement_id": 0, 00:14:10.911 "enable_zerocopy_send_server": true, 00:14:10.911 "enable_zerocopy_send_client": false, 00:14:10.911 "zerocopy_threshold": 0, 00:14:10.911 "tls_version": 0, 00:14:10.911 "enable_ktls": false 00:14:10.911 } 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "method": "sock_impl_set_options", 00:14:10.911 "params": { 00:14:10.911 "impl_name": "posix", 00:14:10.911 "recv_buf_size": 2097152, 00:14:10.911 "send_buf_size": 2097152, 00:14:10.911 "enable_recv_pipe": true, 00:14:10.911 "enable_quickack": false, 00:14:10.911 "enable_placement_id": 0, 00:14:10.911 "enable_zerocopy_send_server": true, 00:14:10.911 "enable_zerocopy_send_client": false, 00:14:10.911 "zerocopy_threshold": 0, 00:14:10.911 "tls_version": 0, 00:14:10.911 "enable_ktls": false 00:14:10.911 } 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "method": "sock_impl_set_options", 00:14:10.911 "params": { 00:14:10.911 "impl_name": "uring", 00:14:10.911 "recv_buf_size": 2097152, 00:14:10.911 "send_buf_size": 2097152, 00:14:10.911 "enable_recv_pipe": true, 00:14:10.911 "enable_quickack": false, 00:14:10.911 "enable_placement_id": 0, 00:14:10.911 "enable_zerocopy_send_server": false, 00:14:10.911 "enable_zerocopy_send_client": false, 00:14:10.911 "zerocopy_threshold": 0, 00:14:10.911 "tls_version": 0, 00:14:10.911 "enable_ktls": false 00:14:10.911 } 00:14:10.911 } 00:14:10.911 ] 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "subsystem": "vmd", 00:14:10.911 "config": [] 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "subsystem": "accel", 00:14:10.911 "config": [ 00:14:10.911 { 00:14:10.911 "method": "accel_set_options", 00:14:10.911 "params": { 00:14:10.911 "small_cache_size": 128, 00:14:10.911 "large_cache_size": 16, 00:14:10.911 "task_count": 2048, 00:14:10.911 "sequence_count": 2048, 00:14:10.911 "buf_count": 2048 00:14:10.911 } 00:14:10.911 } 00:14:10.911 ] 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "subsystem": "bdev", 00:14:10.911 "config": [ 00:14:10.911 { 00:14:10.911 
"method": "bdev_set_options", 00:14:10.911 "params": { 00:14:10.911 "bdev_io_pool_size": 65535, 00:14:10.911 "bdev_io_cache_size": 256, 00:14:10.911 "bdev_auto_examine": true, 00:14:10.911 "iobuf_small_cache_size": 128, 00:14:10.911 "iobuf_large_cache_size": 16 00:14:10.911 } 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "method": "bdev_raid_set_options", 00:14:10.911 "params": { 00:14:10.911 "process_window_size_kb": 1024 00:14:10.911 } 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "method": "bdev_iscsi_set_options", 00:14:10.911 "params": { 00:14:10.911 "timeout_sec": 30 00:14:10.911 } 00:14:10.911 }, 00:14:10.911 { 00:14:10.911 "method": "bdev_nvme_set_options", 00:14:10.911 "params": { 00:14:10.911 "action_on_timeout": "none", 00:14:10.911 "timeout_us": 0, 00:14:10.911 "timeout_admin_us": 0, 00:14:10.911 "keep_alive_timeout_ms": 10000, 00:14:10.911 "arbitration_burst": 0, 00:14:10.911 "low_priority_weight": 0, 00:14:10.911 "medium_priority_weight": 0, 00:14:10.911 "high_priority_weight": 0, 00:14:10.911 "nvme_adminq_poll_period_us": 10000, 00:14:10.911 "nvme_ioq_poll_period_us": 0, 00:14:10.911 "io_queue_requests": 0, 00:14:10.911 "delay_cmd_submit": true, 00:14:10.911 "transport_retry_count": 4, 00:14:10.911 "bdev_retry_count": 3, 00:14:10.911 "transport_ack_timeout": 0, 00:14:10.911 "ctrlr_loss_timeout_sec": 0, 00:14:10.911 "reconnect_delay_sec": 0, 00:14:10.911 "fast_io_fail_timeout_sec": 0, 00:14:10.911 "disable_auto_failback": false, 00:14:10.911 "generate_uuids": false, 00:14:10.911 "transport_tos": 0, 00:14:10.911 "nvme_error_stat": false, 00:14:10.911 "rdma_srq_size": 0, 00:14:10.911 "io_path_stat": false, 00:14:10.911 "allow_accel_sequence": false, 00:14:10.911 "rdma_max_cq_size": 0, 00:14:10.911 "rdma_cm_event_timeout_ms": 0, 00:14:10.911 "dhchap_digests": [ 00:14:10.911 "sha256", 00:14:10.911 "sha384", 00:14:10.911 "sha512" 00:14:10.911 ], 00:14:10.911 "dhchap_dhgroups": [ 00:14:10.911 "null", 00:14:10.912 "ffdhe2048", 00:14:10.912 "ffdhe3072", 00:14:10.912 "ffdhe4096", 00:14:10.912 "ffdhe6144", 00:14:10.912 "ffdhe8192" 00:14:10.912 ] 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "bdev_nvme_set_hotplug", 00:14:10.912 "params": { 00:14:10.912 "period_us": 100000, 00:14:10.912 "enable": false 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "bdev_malloc_create", 00:14:10.912 "params": { 00:14:10.912 "name": "malloc0", 00:14:10.912 "num_blocks": 8192, 00:14:10.912 "block_size": 4096, 00:14:10.912 "physical_block_size": 4096, 00:14:10.912 "uuid": "21c9798c-4a91-4323-beab-dd687b6d98ff", 00:14:10.912 "optimal_io_boundary": 0 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "bdev_wait_for_examine" 00:14:10.912 } 00:14:10.912 ] 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "subsystem": "nbd", 00:14:10.912 "config": [] 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "subsystem": "scheduler", 00:14:10.912 "config": [ 00:14:10.912 { 00:14:10.912 "method": "framework_set_scheduler", 00:14:10.912 "params": { 00:14:10.912 "name": "static" 00:14:10.912 } 00:14:10.912 } 00:14:10.912 ] 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "subsystem": "nvmf", 00:14:10.912 "config": [ 00:14:10.912 { 00:14:10.912 "method": "nvmf_set_config", 00:14:10.912 "params": { 00:14:10.912 "discovery_filter": "match_any", 00:14:10.912 "admin_cmd_passthru": { 00:14:10.912 "identify_ctrlr": false 00:14:10.912 } 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "nvmf_set_max_subsystems", 00:14:10.912 "params": { 00:14:10.912 "max_subsystems": 
1024 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "nvmf_set_crdt", 00:14:10.912 "params": { 00:14:10.912 "crdt1": 0, 00:14:10.912 "crdt2": 0, 00:14:10.912 "crdt3": 0 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "nvmf_create_transport", 00:14:10.912 "params": { 00:14:10.912 "trtype": "TCP", 00:14:10.912 "max_queue_depth": 128, 00:14:10.912 "max_io_qpairs_per_ctrlr": 127, 00:14:10.912 "in_capsule_data_size": 4096, 00:14:10.912 "max_io_size": 131072, 00:14:10.912 "io_unit_size": 131072, 00:14:10.912 "max_aq_depth": 128, 00:14:10.912 "num_shared_buffers": 511, 00:14:10.912 "buf_cache_size": 4294967295, 00:14:10.912 "dif_insert_or_strip": false, 00:14:10.912 "zcopy": false, 00:14:10.912 "c2h_success": false, 00:14:10.912 "sock_priority": 0, 00:14:10.912 "abort_timeout_sec": 1, 00:14:10.912 "ack_timeout": 0, 00:14:10.912 "data_wr_pool_size": 0 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "nvmf_create_subsystem", 00:14:10.912 "params": { 00:14:10.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.912 "allow_any_host": false, 00:14:10.912 "serial_number": "00000000000000000000", 00:14:10.912 "model_number": "SPDK bdev Controller", 00:14:10.912 "max_namespaces": 32, 00:14:10.912 "min_cntlid": 1, 00:14:10.912 "max_cntlid": 65519, 00:14:10.912 "ana_reporting": false 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "nvmf_subsystem_add_host", 00:14:10.912 "params": { 00:14:10.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.912 "host": "nqn.2016-06.io.spdk:host1", 00:14:10.912 "psk": "key0" 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "nvmf_subsystem_add_ns", 00:14:10.912 "params": { 00:14:10.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.912 "namespace": { 00:14:10.912 "nsid": 1, 00:14:10.912 "bdev_name": "malloc0", 00:14:10.912 "nguid": "21C9798C4A914323BEABDD687B6D98FF", 00:14:10.912 "uuid": "21c9798c-4a91-4323-beab-dd687b6d98ff", 00:14:10.912 "no_auto_visible": false 00:14:10.912 } 00:14:10.912 } 00:14:10.912 }, 00:14:10.912 { 00:14:10.912 "method": "nvmf_subsystem_add_listener", 00:14:10.912 "params": { 00:14:10.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.912 "listen_address": { 00:14:10.912 "trtype": "TCP", 00:14:10.912 "adrfam": "IPv4", 00:14:10.912 "traddr": "10.0.0.2", 00:14:10.912 "trsvcid": "4420" 00:14:10.912 }, 00:14:10.912 "secure_channel": true 00:14:10.912 } 00:14:10.912 } 00:14:10.912 ] 00:14:10.912 } 00:14:10.912 ] 00:14:10.912 }' 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74115 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74115 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74115 ']' 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
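The JSON echoed above is a full save_config dump, so most of it is defaults; the parts that actually make this a TLS target are the keyring entry and the two nvmf calls that bind the PSK to the host and mark the listener as a secure channel. Distilled into a fragment below (all parameter values are verbatim from the dump; the file name is illustrative, and this is not a complete standalone config since it omits transport and subsystem creation):

  cat <<'EOF' > tls-target-fragment.json
  { "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.PG0CDOhMqD" } } ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": true } } ] } ] }
  EOF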
00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.912 11:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.912 [2024-07-12 11:38:14.331173] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:10.912 [2024-07-12 11:38:14.331476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.171 [2024-07-12 11:38:14.468178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.171 [2024-07-12 11:38:14.575265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.171 [2024-07-12 11:38:14.575551] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.171 [2024-07-12 11:38:14.575714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.171 [2024-07-12 11:38:14.575850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.171 [2024-07-12 11:38:14.575887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.171 [2024-07-12 11:38:14.576061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.430 [2024-07-12 11:38:14.743191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:11.430 [2024-07-12 11:38:14.821118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.430 [2024-07-12 11:38:14.853051] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:11.430 [2024-07-12 11:38:14.853280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74147 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74147 /var/tmp/bdevperf.sock 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74147 ']' 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
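On the client side the same trick is used: the bperfcfg dump is echoed back into bdevperf via -c /dev/fd/63, so the controller attach (including the PSK) happens during startup rather than through live RPCs. A sketch of the two-step bdevperf flow, with bperf.json standing in for the /dev/fd/63 redirection the harness actually uses:

  # -z makes bdevperf start up and wait for an RPC trigger, so run it in the background
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c bperf.json &
  # once the socket is up, kick off the 1-second verify workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests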
00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.998 11:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:11.998 "subsystems": [ 00:14:11.998 { 00:14:11.998 "subsystem": "keyring", 00:14:11.998 "config": [ 00:14:11.998 { 00:14:11.998 "method": "keyring_file_add_key", 00:14:11.998 "params": { 00:14:11.998 "name": "key0", 00:14:11.998 "path": "/tmp/tmp.PG0CDOhMqD" 00:14:11.998 } 00:14:11.998 } 00:14:11.998 ] 00:14:11.998 }, 00:14:11.998 { 00:14:11.998 "subsystem": "iobuf", 00:14:11.998 "config": [ 00:14:11.998 { 00:14:11.998 "method": "iobuf_set_options", 00:14:11.998 "params": { 00:14:11.998 "small_pool_count": 8192, 00:14:11.998 "large_pool_count": 1024, 00:14:11.998 "small_bufsize": 8192, 00:14:11.998 "large_bufsize": 135168 00:14:11.998 } 00:14:11.998 } 00:14:11.998 ] 00:14:11.998 }, 00:14:11.998 { 00:14:11.998 "subsystem": "sock", 00:14:11.998 "config": [ 00:14:11.998 { 00:14:11.998 "method": "sock_set_default_impl", 00:14:11.998 "params": { 00:14:11.998 "impl_name": "uring" 00:14:11.998 } 00:14:11.998 }, 00:14:11.998 { 00:14:11.998 "method": "sock_impl_set_options", 00:14:11.998 "params": { 00:14:11.998 "impl_name": "ssl", 00:14:11.998 "recv_buf_size": 4096, 00:14:11.998 "send_buf_size": 4096, 00:14:11.998 "enable_recv_pipe": true, 00:14:11.998 "enable_quickack": false, 00:14:11.998 "enable_placement_id": 0, 00:14:11.998 "enable_zerocopy_send_server": true, 00:14:11.998 "enable_zerocopy_send_client": false, 00:14:11.998 "zerocopy_threshold": 0, 00:14:11.998 "tls_version": 0, 00:14:11.998 "enable_ktls": false 00:14:11.998 } 00:14:11.998 }, 00:14:11.998 { 00:14:11.998 "method": "sock_impl_set_options", 00:14:11.998 "params": { 00:14:11.998 "impl_name": "posix", 00:14:11.998 "recv_buf_size": 2097152, 00:14:11.998 "send_buf_size": 2097152, 00:14:11.998 "enable_recv_pipe": true, 00:14:11.998 "enable_quickack": false, 00:14:11.998 "enable_placement_id": 0, 00:14:11.998 "enable_zerocopy_send_server": true, 00:14:11.998 "enable_zerocopy_send_client": false, 00:14:11.999 "zerocopy_threshold": 0, 00:14:11.999 "tls_version": 0, 00:14:11.999 "enable_ktls": false 00:14:11.999 } 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "method": "sock_impl_set_options", 00:14:11.999 "params": { 00:14:11.999 "impl_name": "uring", 00:14:11.999 "recv_buf_size": 2097152, 00:14:11.999 "send_buf_size": 2097152, 00:14:11.999 "enable_recv_pipe": true, 00:14:11.999 "enable_quickack": false, 00:14:11.999 "enable_placement_id": 0, 00:14:11.999 "enable_zerocopy_send_server": false, 00:14:11.999 "enable_zerocopy_send_client": false, 00:14:11.999 "zerocopy_threshold": 0, 00:14:11.999 "tls_version": 0, 00:14:11.999 "enable_ktls": false 00:14:11.999 } 00:14:11.999 } 00:14:11.999 ] 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "subsystem": "vmd", 00:14:11.999 "config": [] 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "subsystem": "accel", 00:14:11.999 "config": [ 00:14:11.999 { 00:14:11.999 "method": "accel_set_options", 00:14:11.999 "params": { 00:14:11.999 "small_cache_size": 128, 00:14:11.999 "large_cache_size": 16, 00:14:11.999 "task_count": 2048, 00:14:11.999 "sequence_count": 2048, 00:14:11.999 "buf_count": 2048 00:14:11.999 } 00:14:11.999 } 00:14:11.999 ] 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "subsystem": "bdev", 00:14:11.999 "config": [ 00:14:11.999 { 00:14:11.999 "method": "bdev_set_options", 00:14:11.999 "params": { 00:14:11.999 "bdev_io_pool_size": 65535, 00:14:11.999 "bdev_io_cache_size": 256, 00:14:11.999 "bdev_auto_examine": true, 00:14:11.999 
"iobuf_small_cache_size": 128, 00:14:11.999 "iobuf_large_cache_size": 16 00:14:11.999 } 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "method": "bdev_raid_set_options", 00:14:11.999 "params": { 00:14:11.999 "process_window_size_kb": 1024 00:14:11.999 } 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "method": "bdev_iscsi_set_options", 00:14:11.999 "params": { 00:14:11.999 "timeout_sec": 30 00:14:11.999 } 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "method": "bdev_nvme_set_options", 00:14:11.999 "params": { 00:14:11.999 "action_on_timeout": "none", 00:14:11.999 "timeout_us": 0, 00:14:11.999 "timeout_admin_us": 0, 00:14:11.999 "keep_alive_timeout_ms": 10000, 00:14:11.999 "arbitration_burst": 0, 00:14:11.999 "low_priority_weight": 0, 00:14:11.999 "medium_priority_weight": 0, 00:14:11.999 "high_priority_weight": 0, 00:14:11.999 "nvme_adminq_poll_period_us": 10000, 00:14:11.999 "nvme_ioq_poll_period_us": 0, 00:14:11.999 "io_queue_requests": 512, 00:14:11.999 "delay_cmd_submit": true, 00:14:11.999 "transport_retry_count": 4, 00:14:11.999 "bdev_retry_count": 3, 00:14:11.999 "transport_ack_timeout": 0, 00:14:11.999 "ctrlr_loss_timeout_sec": 0, 00:14:11.999 "reconnect_delay_sec": 0, 00:14:11.999 "fast_io_fail_timeout_sec": 0, 00:14:11.999 "disable_auto_failback": false, 00:14:11.999 "generate_uuids": false, 00:14:11.999 "transport_tos": 0, 00:14:11.999 "nvme_error_stat": false, 00:14:11.999 "rdma_srq_size": 0, 00:14:11.999 "io_path_stat": false, 00:14:11.999 "allow_accel_sequence": false, 00:14:11.999 "rdma_max_cq_size": 0, 00:14:11.999 "rdma_cm_event_timeout_ms": 0, 00:14:11.999 "dhchap_digests": [ 00:14:11.999 "sha256", 00:14:11.999 "sha384", 00:14:11.999 "sha512" 00:14:11.999 ], 00:14:11.999 "dhchap_dhgroups": [ 00:14:11.999 "null", 00:14:11.999 "ffdhe2048", 00:14:11.999 "ffdhe3072", 00:14:11.999 "ffdhe4096", 00:14:11.999 "ffdhe6144", 00:14:11.999 "ffdhe8192" 00:14:11.999 ] 00:14:11.999 } 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "method": "bdev_nvme_attach_controller", 00:14:11.999 "params": { 00:14:11.999 "name": "nvme0", 00:14:11.999 "trtype": "TCP", 00:14:11.999 "adrfam": "IPv4", 00:14:11.999 "traddr": "10.0.0.2", 00:14:11.999 "trsvcid": "4420", 00:14:11.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.999 "prchk_reftag": false, 00:14:11.999 "prchk_guard": false, 00:14:11.999 "ctrlr_loss_timeout_sec": 0, 00:14:11.999 "reconnect_delay_sec": 0, 00:14:11.999 "fast_io_fail_timeout_sec": 0, 00:14:11.999 "psk": "key0", 00:14:11.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.999 "hdgst": false, 00:14:11.999 "ddgst": false 00:14:11.999 } 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "method": "bdev_nvme_set_hotplug", 00:14:11.999 "params": { 00:14:11.999 "period_us": 100000, 00:14:11.999 "enable": false 00:14:11.999 } 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "method": "bdev_enable_histogram", 00:14:11.999 "params": { 00:14:11.999 "name": "nvme0n1", 00:14:11.999 "enable": true 00:14:11.999 } 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "method": "bdev_wait_for_examine" 00:14:11.999 } 00:14:11.999 ] 00:14:11.999 }, 00:14:11.999 { 00:14:11.999 "subsystem": "nbd", 00:14:11.999 "config": [] 00:14:11.999 } 00:14:11.999 ] 00:14:11.999 }' 00:14:11.999 11:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.999 [2024-07-12 11:38:15.382056] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:14:11.999 [2024-07-12 11:38:15.382344] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74147 ] 00:14:12.258 [2024-07-12 11:38:15.523565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.258 [2024-07-12 11:38:15.668829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.516 [2024-07-12 11:38:15.804632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.516 [2024-07-12 11:38:15.850036] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:13.082 11:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.082 11:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:13.082 11:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:13.082 11:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:13.357 11:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.357 11:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:13.622 Running I/O for 1 seconds... 00:14:14.556 00:14:14.556 Latency(us) 00:14:14.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.556 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:14.556 Verification LBA range: start 0x0 length 0x2000 00:14:14.556 nvme0n1 : 1.02 4010.98 15.67 0.00 0.00 31563.57 9651.67 22043.93 00:14:14.556 =================================================================================================================== 00:14:14.556 Total : 4010.98 15.67 0.00 0.00 31563.57 9651.67 22043.93 00:14:14.556 0 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:14.556 nvmf_trace.0 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74147 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74147 ']' 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 74147 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74147 00:14:14.556 killing process with pid 74147 00:14:14.556 Received shutdown signal, test time was about 1.000000 seconds 00:14:14.556 00:14:14.556 Latency(us) 00:14:14.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.556 =================================================================================================================== 00:14:14.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74147' 00:14:14.556 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74147 00:14:14.557 11:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74147 00:14:14.814 11:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:14.814 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:14.814 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:14.814 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:14.814 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:14.814 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.814 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:14.814 rmmod nvme_tcp 00:14:14.814 rmmod nvme_fabrics 00:14:14.814 rmmod nvme_keyring 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74115 ']' 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74115 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74115 ']' 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74115 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74115 00:14:15.073 killing process with pid 74115 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74115' 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74115 00:14:15.073 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74115 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.332 11:38:18 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.vRxMqbRYFB /tmp/tmp.kLYkV1JOjL /tmp/tmp.PG0CDOhMqD 00:14:15.332 00:14:15.332 real 1m27.568s 00:14:15.332 user 2m20.618s 00:14:15.332 sys 0m27.456s 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.332 11:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.332 ************************************ 00:14:15.332 END TEST nvmf_tls 00:14:15.332 ************************************ 00:14:15.332 11:38:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:15.332 11:38:18 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:15.332 11:38:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:15.332 11:38:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.332 11:38:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.332 ************************************ 00:14:15.332 START TEST nvmf_fips 00:14:15.332 ************************************ 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:15.332 * Looking for test storage... 
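The nvmf_fips test that starts here does not touch the target until it has verified that the host OpenSSL can run in FIPS mode. Condensed from the xtrace that follows, the preconditions boil down to three shell checks (OpenSSL 3.0.0 or newer, a FIPS module on disk, and a FIPS provider registered):

  ver=$(openssl version | awk '{print $2}')   # 3.0.9 in this run, compared against 3.0.0
  openssl info -modulesdir                    # the reported directory must contain fips.so
  openssl list -providers | grep name         # expects both a base and a fips provider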
00:14:15.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.332 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:15.333 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:15.334 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:15.592 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:15.592 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:15.592 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:15.593 Error setting digest 00:14:15.593 00E22A21517F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:15.593 00E22A21517F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:15.593 Cannot find device "nvmf_tgt_br" 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.593 Cannot find device "nvmf_tgt_br2" 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:15.593 Cannot find device "nvmf_tgt_br" 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:15.593 Cannot find device "nvmf_tgt_br2" 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:15.593 11:38:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:15.593 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.593 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:15.593 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.593 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:15.593 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:15.593 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:15.593 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:15.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:14:15.852 00:14:15.852 --- 10.0.0.2 ping statistics --- 00:14:15.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.852 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:15.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:15.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:14:15.852 00:14:15.852 --- 10.0.0.3 ping statistics --- 00:14:15.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.852 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:15.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:15.852 00:14:15.852 --- 10.0.0.1 ping statistics --- 00:14:15.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.852 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74419 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74419 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74419 ']' 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.852 11:38:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:16.111 [2024-07-12 11:38:19.322379] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
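The namespace and veth topology that nvmf_veth_init builds in the trace above can be reproduced by hand. A minimal sketch follows, using the interface names and addresses shown in the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up the same way and is omitted here); error handling is left out:

  # create the target namespace and two veth pairs whose peer ends will be bridged
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator 10.0.0.1/24 on the host, target 10.0.0.2/24 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring the links up and join the peer ends with a bridge
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # admit NVMe/TCP traffic on port 4420 and let the bridge forward
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity check, matching the pings in the trace
  ping -c 1 10.0.0.2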
00:14:16.111 [2024-07-12 11:38:19.322463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.111 [2024-07-12 11:38:19.458066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.369 [2024-07-12 11:38:19.562589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.369 [2024-07-12 11:38:19.562639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.369 [2024-07-12 11:38:19.562651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.369 [2024-07-12 11:38:19.562659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.369 [2024-07-12 11:38:19.562667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.369 [2024-07-12 11:38:19.562692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.369 [2024-07-12 11:38:19.618183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:16.945 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.222 [2024-07-12 11:38:20.552472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.222 [2024-07-12 11:38:20.568411] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:17.222 [2024-07-12 11:38:20.568609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.222 [2024-07-12 11:38:20.599615] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:17.222 malloc0 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
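The PSK handling shown above amounts to writing the interchange-format key to a 0600 file and registering it with the target for the test host. A minimal sketch, using the test key and path from the trace; note that setup_nvmf_tgt_conf's RPC call is not echoed in the trace, so the nvmf_subsystem_add_host --psk invocation below is an assumption and the flag may differ between SPDK versions:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

  # TLS PSKs must not be world-readable, hence the explicit chmod in the trace
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"

  # assumed RPC: allow host1 to connect to cnode1 with this PSK (what the
  # "PSK path" deprecation notice above is warning about)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"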
00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74453 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74453 /var/tmp/bdevperf.sock 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74453 ']' 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.222 11:38:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:17.480 [2024-07-12 11:38:20.688560] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:17.480 [2024-07-12 11:38:20.688650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74453 ] 00:14:17.480 [2024-07-12 11:38:20.821478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.739 [2024-07-12 11:38:20.945777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.739 [2024-07-12 11:38:21.005916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.305 11:38:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.305 11:38:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:18.305 11:38:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:18.563 [2024-07-12 11:38:21.898626] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.563 [2024-07-12 11:38:21.898753] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:18.563 TLSTESTn1 00:14:18.563 11:38:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:18.821 Running I/O for 10 seconds... 
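On the initiator side the flow in the trace is: start bdevperf idle (-z) on its own RPC socket, attach a TLS-enabled NVMe-oF controller using the same PSK, then kick off the workload. Condensed from the commands above (paths relative to the SPDK repo root):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # attach over TCP with the PSK; -q carries the host NQN used for the connection
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

  # run the 10-second verify workload against the attached bdev (TLSTESTn1)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests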
00:14:28.803 00:14:28.803 Latency(us) 00:14:28.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.803 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:28.803 Verification LBA range: start 0x0 length 0x2000 00:14:28.803 TLSTESTn1 : 10.02 3856.35 15.06 0.00 0.00 33126.69 7477.06 23950.43 00:14:28.803 =================================================================================================================== 00:14:28.803 Total : 3856.35 15.06 0.00 0.00 33126.69 7477.06 23950.43 00:14:28.803 0 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:28.803 nvmf_trace.0 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74453 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74453 ']' 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74453 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.803 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74453 00:14:29.061 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:29.061 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:29.061 killing process with pid 74453 00:14:29.061 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74453' 00:14:29.061 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74453 00:14:29.061 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.061 00:14:29.061 Latency(us) 00:14:29.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.061 =================================================================================================================== 00:14:29.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.061 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74453 00:14:29.061 [2024-07-12 11:38:32.257880] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:29.061 11:38:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:29.061 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
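The cleanup trap above archives the target's trace shared-memory file before stopping the apps. Reduced to its essentials, and reusing the pid variable name from the trace:

  # collect the trace file nvmf_tgt left in /dev/shm (shm id 0 maps to nvmf_trace.0)
  find /dev/shm -name '*.0' -printf '%f\n'
  tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0

  # stop bdevperf: confirm the pid is alive, signal it, then reap it
  # (assumes it was launched from this shell, as in the test scripts)
  kill -0 "$bdevperf_pid" && kill "$bdevperf_pid"
  wait "$bdevperf_pid"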
00:14:29.061 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.320 rmmod nvme_tcp 00:14:29.320 rmmod nvme_fabrics 00:14:29.320 rmmod nvme_keyring 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74419 ']' 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74419 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74419 ']' 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74419 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74419 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:29.320 killing process with pid 74419 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74419' 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74419 00:14:29.320 [2024-07-12 11:38:32.620507] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:29.320 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74419 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:29.580 00:14:29.580 real 0m14.276s 00:14:29.580 user 0m19.192s 00:14:29.580 sys 0m5.927s 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.580 11:38:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:29.580 ************************************ 00:14:29.580 END TEST nvmf_fips 00:14:29.580 ************************************ 00:14:29.580 11:38:32 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:29.580 11:38:32 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:29.580 11:38:32 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:29.580 11:38:32 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:29.580 11:38:32 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.580 11:38:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.580 11:38:32 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:29.580 11:38:32 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.580 11:38:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.580 11:38:32 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:29.580 11:38:32 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:29.580 11:38:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.580 11:38:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.580 11:38:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.580 ************************************ 00:14:29.580 START TEST nvmf_identify 00:14:29.580 ************************************ 00:14:29.580 11:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:29.838 * Looking for test storage... 00:14:29.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:29.838 11:38:33 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.838 11:38:33 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:29.838 Cannot find device "nvmf_tgt_br" 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:29.838 Cannot find device "nvmf_tgt_br2" 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:29.838 11:38:33 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:29.838 Cannot find device "nvmf_tgt_br" 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:29.838 Cannot find device "nvmf_tgt_br2" 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:29.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:29.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:29.838 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:30.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:30.097 00:14:30.097 --- 10.0.0.2 ping statistics --- 00:14:30.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.097 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:30.097 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:30.097 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:14:30.097 00:14:30.097 --- 10.0.0.3 ping statistics --- 00:14:30.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.097 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:14:30.097 00:14:30.097 --- 10.0.0.1 ping statistics --- 00:14:30.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.097 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74797 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74797 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74797 ']' 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:14:30.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.097 11:38:33 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.097 [2024-07-12 11:38:33.499212] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:30.097 [2024-07-12 11:38:33.499318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.355 [2024-07-12 11:38:33.640171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.355 [2024-07-12 11:38:33.761221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.355 [2024-07-12 11:38:33.761292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.355 [2024-07-12 11:38:33.761342] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.355 [2024-07-12 11:38:33.761354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.355 [2024-07-12 11:38:33.761364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
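Starting the target inside the namespace and waiting for its RPC socket, as the nvmfappstart/waitforlisten pair above does, reduces to roughly the following; the polling loop is a simplified stand-in for waitforlisten, whose body is not expanded in the trace:

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # crude stand-in for waitforlisten: poll until the RPC unix socket appears
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done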
00:14:30.355 [2024-07-12 11:38:33.761700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.355 [2024-07-12 11:38:33.762063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.355 [2024-07-12 11:38:33.762195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.355 [2024-07-12 11:38:33.762204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.613 [2024-07-12 11:38:33.830619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.180 [2024-07-12 11:38:34.485886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.180 Malloc0 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.180 [2024-07-12 11:38:34.583634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.180 [ 00:14:31.180 { 00:14:31.180 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:31.180 "subtype": "Discovery", 00:14:31.180 "listen_addresses": [ 00:14:31.180 { 00:14:31.180 "trtype": "TCP", 00:14:31.180 "adrfam": "IPv4", 00:14:31.180 "traddr": "10.0.0.2", 00:14:31.180 "trsvcid": "4420" 00:14:31.180 } 00:14:31.180 ], 00:14:31.180 "allow_any_host": true, 00:14:31.180 "hosts": [] 00:14:31.180 }, 00:14:31.180 { 00:14:31.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.180 "subtype": "NVMe", 00:14:31.180 "listen_addresses": [ 00:14:31.180 { 00:14:31.180 "trtype": "TCP", 00:14:31.180 "adrfam": "IPv4", 00:14:31.180 "traddr": "10.0.0.2", 00:14:31.180 "trsvcid": "4420" 00:14:31.180 } 00:14:31.180 ], 00:14:31.180 "allow_any_host": true, 00:14:31.180 "hosts": [], 00:14:31.180 "serial_number": "SPDK00000000000001", 00:14:31.180 "model_number": "SPDK bdev Controller", 00:14:31.180 "max_namespaces": 32, 00:14:31.180 "min_cntlid": 1, 00:14:31.180 "max_cntlid": 65519, 00:14:31.180 "namespaces": [ 00:14:31.180 { 00:14:31.180 "nsid": 1, 00:14:31.180 "bdev_name": "Malloc0", 00:14:31.180 "name": "Malloc0", 00:14:31.180 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:31.180 "eui64": "ABCDEF0123456789", 00:14:31.180 "uuid": "dec4b29f-cd8b-4593-aa38-d973c50602bb" 00:14:31.180 } 00:14:31.180 ] 00:14:31.180 } 00:14:31.180 ] 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.180 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:31.513 [2024-07-12 11:38:34.637716] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
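The provisioning that produces the subsystem listing above is a handful of RPCs; spelled out with rpc.py directly (rpc_cmd resolves to scripts/rpc.py in these tests), followed by the identify example that the trace launches next:

  rpc=scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # query the discovery subsystem with all log flags enabled, as in the trace
  build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all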
00:14:31.513 [2024-07-12 11:38:34.637787] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74832 ] 00:14:31.513 [2024-07-12 11:38:34.781832] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:31.513 [2024-07-12 11:38:34.781908] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:31.513 [2024-07-12 11:38:34.781916] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:31.513 [2024-07-12 11:38:34.781931] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:31.513 [2024-07-12 11:38:34.781938] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:31.513 [2024-07-12 11:38:34.782101] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:31.513 [2024-07-12 11:38:34.782156] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13522c0 0 00:14:31.513 [2024-07-12 11:38:34.794601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:31.513 [2024-07-12 11:38:34.794625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:31.513 [2024-07-12 11:38:34.794631] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:31.513 [2024-07-12 11:38:34.794635] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:31.513 [2024-07-12 11:38:34.794685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.794693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.794697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.513 [2024-07-12 11:38:34.794713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:31.513 [2024-07-12 11:38:34.794746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.513 [2024-07-12 11:38:34.802597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.513 [2024-07-12 11:38:34.802622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.513 [2024-07-12 11:38:34.802627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.802633] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.513 [2024-07-12 11:38:34.802649] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:31.513 [2024-07-12 11:38:34.802659] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:31.513 [2024-07-12 11:38:34.802665] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:31.513 [2024-07-12 11:38:34.802686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.802692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.513 
[2024-07-12 11:38:34.802697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.513 [2024-07-12 11:38:34.802708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.513 [2024-07-12 11:38:34.802738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.513 [2024-07-12 11:38:34.802804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.513 [2024-07-12 11:38:34.802812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.513 [2024-07-12 11:38:34.802816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.802820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.513 [2024-07-12 11:38:34.802826] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:31.513 [2024-07-12 11:38:34.802834] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:31.513 [2024-07-12 11:38:34.802843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.802847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.802851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.513 [2024-07-12 11:38:34.802859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.513 [2024-07-12 11:38:34.802878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.513 [2024-07-12 11:38:34.802922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.513 [2024-07-12 11:38:34.802929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.513 [2024-07-12 11:38:34.802933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.802937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.513 [2024-07-12 11:38:34.802943] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:31.513 [2024-07-12 11:38:34.802952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:31.513 [2024-07-12 11:38:34.802960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.802964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.802968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.513 [2024-07-12 11:38:34.802975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.513 [2024-07-12 11:38:34.802993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.513 [2024-07-12 11:38:34.803048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.513 [2024-07-12 11:38:34.803055] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.513 [2024-07-12 11:38:34.803058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.513 [2024-07-12 11:38:34.803068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:31.513 [2024-07-12 11:38:34.803079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.513 [2024-07-12 11:38:34.803094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.513 [2024-07-12 11:38:34.803111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.513 [2024-07-12 11:38:34.803158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.513 [2024-07-12 11:38:34.803165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.513 [2024-07-12 11:38:34.803169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.513 [2024-07-12 11:38:34.803178] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:31.513 [2024-07-12 11:38:34.803184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:31.513 [2024-07-12 11:38:34.803192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:31.513 [2024-07-12 11:38:34.803297] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:31.513 [2024-07-12 11:38:34.803303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:31.513 [2024-07-12 11:38:34.803313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.513 [2024-07-12 11:38:34.803329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.513 [2024-07-12 11:38:34.803347] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.513 [2024-07-12 11:38:34.803410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.513 [2024-07-12 11:38:34.803418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.513 [2024-07-12 11:38:34.803422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.513 
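The records around this point trace the standard controller-enable handshake, carried over fabrics property commands on the admin queue: the host reads VS and CAP, reads CC, disables the controller and waits for CSTS.RDY = 0, writes CC.EN = 1 (the FABRIC PROPERTY SET just above), and then polls CSTS until RDY = 1 before moving on to identify. A hedged way to pull that rhythm out of a saved copy of this console output (the log file name here is hypothetical):

  # Consecutive identical property commands collapse into counts:
  grep -o 'FABRIC PROPERTY [GS]ET qid:0 cid:[0-9]' nvmf_identify.log | uniq -c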
[2024-07-12 11:38:34.803426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.513 [2024-07-12 11:38:34.803431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:31.513 [2024-07-12 11:38:34.803442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803450] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.513 [2024-07-12 11:38:34.803458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.513 [2024-07-12 11:38:34.803476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.513 [2024-07-12 11:38:34.803521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.513 [2024-07-12 11:38:34.803527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.513 [2024-07-12 11:38:34.803531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.513 [2024-07-12 11:38:34.803540] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:31.513 [2024-07-12 11:38:34.803546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:31.513 [2024-07-12 11:38:34.803554] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:31.513 [2024-07-12 11:38:34.803565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:31.513 [2024-07-12 11:38:34.803577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.513 [2024-07-12 11:38:34.803596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.513 [2024-07-12 11:38:34.803604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.513 [2024-07-12 11:38:34.803625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.514 [2024-07-12 11:38:34.803708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.514 [2024-07-12 11:38:34.803715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.514 [2024-07-12 11:38:34.803719] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803723] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13522c0): datao=0, datal=4096, cccid=0 00:14:31.514 [2024-07-12 11:38:34.803729] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1393940) on tqpair(0x13522c0): expected_datao=0, payload_size=4096 00:14:31.514 [2024-07-12 11:38:34.803734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 
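The IDENTIFY (06) ... cdw10:00000001 NOTICE above is the Identify admin command with CNS 01h (Identify Controller), and the c2h_data PDU just above it (datao=0, datal=4096, cccid=0) carries the 4096-byte controller data structure back to the host. A quick sketch of decoding the CDW10 fields per the NVMe layout (bits 7:0 = CNS, bits 31:16 = CNTID):

  cdw10=0x00000001
  printf 'CNS=0x%02x CNTID=0x%04x\n' $(( cdw10 & 0xff )) $(( (cdw10 >> 16) & 0xffff ))
  # -> CNS=0x01 CNTID=0x0000 (Identify Controller data structure)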
[2024-07-12 11:38:34.803743] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803748] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.514 [2024-07-12 11:38:34.803762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.514 [2024-07-12 11:38:34.803766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.514 [2024-07-12 11:38:34.803780] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:31.514 [2024-07-12 11:38:34.803786] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:31.514 [2024-07-12 11:38:34.803790] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:31.514 [2024-07-12 11:38:34.803796] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:31.514 [2024-07-12 11:38:34.803801] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:31.514 [2024-07-12 11:38:34.803806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:31.514 [2024-07-12 11:38:34.803815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:31.514 [2024-07-12 11:38:34.803823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.803839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:31.514 [2024-07-12 11:38:34.803858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.514 [2024-07-12 11:38:34.803917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.514 [2024-07-12 11:38:34.803924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.514 [2024-07-12 11:38:34.803928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.514 [2024-07-12 11:38:34.803940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.803955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.514 [2024-07-12 11:38:34.803962] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.803976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.514 [2024-07-12 11:38:34.803983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.803991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.803997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.514 [2024-07-12 11:38:34.804004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804012] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.804017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.514 [2024-07-12 11:38:34.804023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:31.514 [2024-07-12 11:38:34.804036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:31.514 [2024-07-12 11:38:34.804045] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.804056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.514 [2024-07-12 11:38:34.804076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393940, cid 0, qid 0 00:14:31.514 [2024-07-12 11:38:34.804083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393ac0, cid 1, qid 0 00:14:31.514 [2024-07-12 11:38:34.804088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393c40, cid 2, qid 0 00:14:31.514 [2024-07-12 11:38:34.804094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.514 [2024-07-12 11:38:34.804099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393f40, cid 4, qid 0 00:14:31.514 [2024-07-12 11:38:34.804183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.514 [2024-07-12 11:38:34.804190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.514 [2024-07-12 11:38:34.804193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393f40) on tqpair=0x13522c0 00:14:31.514 [2024-07-12 11:38:34.804203] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:31.514 [2024-07-12 11:38:34.804213] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:31.514 [2024-07-12 11:38:34.804225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.804237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.514 [2024-07-12 11:38:34.804256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393f40, cid 4, qid 0 00:14:31.514 [2024-07-12 11:38:34.804309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.514 [2024-07-12 11:38:34.804316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.514 [2024-07-12 11:38:34.804320] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804324] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13522c0): datao=0, datal=4096, cccid=4 00:14:31.514 [2024-07-12 11:38:34.804329] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1393f40) on tqpair(0x13522c0): expected_datao=0, payload_size=4096 00:14:31.514 [2024-07-12 11:38:34.804333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804341] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804345] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.514 [2024-07-12 11:38:34.804359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.514 [2024-07-12 11:38:34.804363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393f40) on tqpair=0x13522c0 00:14:31.514 [2024-07-12 11:38:34.804381] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:31.514 [2024-07-12 11:38:34.804412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.804428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.514 [2024-07-12 11:38:34.804436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.804450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.514 [2024-07-12 11:38:34.804475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1393f40, cid 4, qid 0 00:14:31.514 [2024-07-12 11:38:34.804483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13940c0, cid 5, qid 0 00:14:31.514 [2024-07-12 11:38:34.804603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.514 [2024-07-12 11:38:34.804612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.514 [2024-07-12 11:38:34.804616] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804620] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13522c0): datao=0, datal=1024, cccid=4 00:14:31.514 [2024-07-12 11:38:34.804625] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1393f40) on tqpair(0x13522c0): expected_datao=0, payload_size=1024 00:14:31.514 [2024-07-12 11:38:34.804630] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804637] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804641] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.514 [2024-07-12 11:38:34.804653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.514 [2024-07-12 11:38:34.804657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13940c0) on tqpair=0x13522c0 00:14:31.514 [2024-07-12 11:38:34.804680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.514 [2024-07-12 11:38:34.804688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.514 [2024-07-12 11:38:34.804692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393f40) on tqpair=0x13522c0 00:14:31.514 [2024-07-12 11:38:34.804708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.514 [2024-07-12 11:38:34.804713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13522c0) 00:14:31.514 [2024-07-12 11:38:34.804721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.514 [2024-07-12 11:38:34.804745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393f40, cid 4, qid 0 00:14:31.514 [2024-07-12 11:38:34.804810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.514 [2024-07-12 11:38:34.804817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.514 [2024-07-12 11:38:34.804821] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.804825] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13522c0): datao=0, datal=3072, cccid=4 00:14:31.515 [2024-07-12 11:38:34.804830] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1393f40) on tqpair(0x13522c0): expected_datao=0, payload_size=3072 00:14:31.515 [2024-07-12 11:38:34.804835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.804842] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.804846] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.804854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.515 [2024-07-12 11:38:34.804860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.515 [2024-07-12 11:38:34.804864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.804868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393f40) on tqpair=0x13522c0 00:14:31.515 [2024-07-12 11:38:34.804879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.804883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13522c0) 00:14:31.515 [2024-07-12 11:38:34.804891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.515 [2024-07-12 11:38:34.804913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393f40, cid 4, qid 0 00:14:31.515 [2024-07-12 11:38:34.804973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.515 [2024-07-12 11:38:34.804980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.515 [2024-07-12 11:38:34.804984] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.804988] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13522c0): datao=0, datal=8, cccid=4 00:14:31.515 [2024-07-12 11:38:34.804993] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1393f40) on tqpair(0x13522c0): expected_datao=0, payload_size=8 00:14:31.515 [2024-07-12 11:38:34.804997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.805004] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.805008] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.805023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.515 [2024-07-12 11:38:34.805030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.515 [2024-07-12 11:38:34.805034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.515 [2024-07-12 11:38:34.805038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393f40) on tqpair=0x13522c0 00:14:31.515 ===================================================== 00:14:31.515 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:31.515 ===================================================== 00:14:31.515 Controller Capabilities/Features 00:14:31.515 ================================ 00:14:31.515 Vendor ID: 0000 00:14:31.515 Subsystem Vendor ID: 0000 00:14:31.515 Serial Number: .................... 00:14:31.515 Model Number: ........................................ 
00:14:31.515 Firmware Version: 24.09 00:14:31.515 Recommended Arb Burst: 0 00:14:31.515 IEEE OUI Identifier: 00 00 00 00:14:31.515 Multi-path I/O 00:14:31.515 May have multiple subsystem ports: No 00:14:31.515 May have multiple controllers: No 00:14:31.515 Associated with SR-IOV VF: No 00:14:31.515 Max Data Transfer Size: 131072 00:14:31.515 Max Number of Namespaces: 0 00:14:31.515 Max Number of I/O Queues: 1024 00:14:31.515 NVMe Specification Version (VS): 1.3 00:14:31.515 NVMe Specification Version (Identify): 1.3 00:14:31.515 Maximum Queue Entries: 128 00:14:31.515 Contiguous Queues Required: Yes 00:14:31.515 Arbitration Mechanisms Supported 00:14:31.515 Weighted Round Robin: Not Supported 00:14:31.515 Vendor Specific: Not Supported 00:14:31.515 Reset Timeout: 15000 ms 00:14:31.515 Doorbell Stride: 4 bytes 00:14:31.515 NVM Subsystem Reset: Not Supported 00:14:31.515 Command Sets Supported 00:14:31.515 NVM Command Set: Supported 00:14:31.515 Boot Partition: Not Supported 00:14:31.515 Memory Page Size Minimum: 4096 bytes 00:14:31.515 Memory Page Size Maximum: 4096 bytes 00:14:31.515 Persistent Memory Region: Not Supported 00:14:31.515 Optional Asynchronous Events Supported 00:14:31.515 Namespace Attribute Notices: Not Supported 00:14:31.515 Firmware Activation Notices: Not Supported 00:14:31.515 ANA Change Notices: Not Supported 00:14:31.515 PLE Aggregate Log Change Notices: Not Supported 00:14:31.515 LBA Status Info Alert Notices: Not Supported 00:14:31.515 EGE Aggregate Log Change Notices: Not Supported 00:14:31.515 Normal NVM Subsystem Shutdown event: Not Supported 00:14:31.515 Zone Descriptor Change Notices: Not Supported 00:14:31.515 Discovery Log Change Notices: Supported 00:14:31.515 Controller Attributes 00:14:31.515 128-bit Host Identifier: Not Supported 00:14:31.515 Non-Operational Permissive Mode: Not Supported 00:14:31.515 NVM Sets: Not Supported 00:14:31.515 Read Recovery Levels: Not Supported 00:14:31.515 Endurance Groups: Not Supported 00:14:31.515 Predictable Latency Mode: Not Supported 00:14:31.515 Traffic Based Keep ALive: Not Supported 00:14:31.515 Namespace Granularity: Not Supported 00:14:31.515 SQ Associations: Not Supported 00:14:31.515 UUID List: Not Supported 00:14:31.515 Multi-Domain Subsystem: Not Supported 00:14:31.515 Fixed Capacity Management: Not Supported 00:14:31.515 Variable Capacity Management: Not Supported 00:14:31.515 Delete Endurance Group: Not Supported 00:14:31.515 Delete NVM Set: Not Supported 00:14:31.515 Extended LBA Formats Supported: Not Supported 00:14:31.515 Flexible Data Placement Supported: Not Supported 00:14:31.515 00:14:31.515 Controller Memory Buffer Support 00:14:31.515 ================================ 00:14:31.515 Supported: No 00:14:31.515 00:14:31.515 Persistent Memory Region Support 00:14:31.515 ================================ 00:14:31.515 Supported: No 00:14:31.515 00:14:31.515 Admin Command Set Attributes 00:14:31.515 ============================ 00:14:31.515 Security Send/Receive: Not Supported 00:14:31.515 Format NVM: Not Supported 00:14:31.515 Firmware Activate/Download: Not Supported 00:14:31.515 Namespace Management: Not Supported 00:14:31.515 Device Self-Test: Not Supported 00:14:31.515 Directives: Not Supported 00:14:31.515 NVMe-MI: Not Supported 00:14:31.515 Virtualization Management: Not Supported 00:14:31.515 Doorbell Buffer Config: Not Supported 00:14:31.515 Get LBA Status Capability: Not Supported 00:14:31.515 Command & Feature Lockdown Capability: Not Supported 00:14:31.515 Abort Command Limit: 1 00:14:31.515 Async 
Event Request Limit: 4 00:14:31.515 Number of Firmware Slots: N/A 00:14:31.515 Firmware Slot 1 Read-Only: N/A 00:14:31.515 Firmware Activation Without Reset: N/A 00:14:31.515 Multiple Update Detection Support: N/A 00:14:31.515 Firmware Update Granularity: No Information Provided 00:14:31.515 Per-Namespace SMART Log: No 00:14:31.515 Asymmetric Namespace Access Log Page: Not Supported 00:14:31.515 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:31.515 Command Effects Log Page: Not Supported 00:14:31.515 Get Log Page Extended Data: Supported 00:14:31.515 Telemetry Log Pages: Not Supported 00:14:31.515 Persistent Event Log Pages: Not Supported 00:14:31.515 Supported Log Pages Log Page: May Support 00:14:31.515 Commands Supported & Effects Log Page: Not Supported 00:14:31.515 Feature Identifiers & Effects Log Page:May Support 00:14:31.515 NVMe-MI Commands & Effects Log Page: May Support 00:14:31.515 Data Area 4 for Telemetry Log: Not Supported 00:14:31.515 Error Log Page Entries Supported: 128 00:14:31.515 Keep Alive: Not Supported 00:14:31.515 00:14:31.515 NVM Command Set Attributes 00:14:31.515 ========================== 00:14:31.515 Submission Queue Entry Size 00:14:31.515 Max: 1 00:14:31.515 Min: 1 00:14:31.515 Completion Queue Entry Size 00:14:31.515 Max: 1 00:14:31.515 Min: 1 00:14:31.515 Number of Namespaces: 0 00:14:31.515 Compare Command: Not Supported 00:14:31.515 Write Uncorrectable Command: Not Supported 00:14:31.515 Dataset Management Command: Not Supported 00:14:31.515 Write Zeroes Command: Not Supported 00:14:31.515 Set Features Save Field: Not Supported 00:14:31.515 Reservations: Not Supported 00:14:31.515 Timestamp: Not Supported 00:14:31.515 Copy: Not Supported 00:14:31.515 Volatile Write Cache: Not Present 00:14:31.515 Atomic Write Unit (Normal): 1 00:14:31.515 Atomic Write Unit (PFail): 1 00:14:31.515 Atomic Compare & Write Unit: 1 00:14:31.515 Fused Compare & Write: Supported 00:14:31.515 Scatter-Gather List 00:14:31.515 SGL Command Set: Supported 00:14:31.515 SGL Keyed: Supported 00:14:31.515 SGL Bit Bucket Descriptor: Not Supported 00:14:31.515 SGL Metadata Pointer: Not Supported 00:14:31.515 Oversized SGL: Not Supported 00:14:31.515 SGL Metadata Address: Not Supported 00:14:31.515 SGL Offset: Supported 00:14:31.515 Transport SGL Data Block: Not Supported 00:14:31.515 Replay Protected Memory Block: Not Supported 00:14:31.515 00:14:31.515 Firmware Slot Information 00:14:31.515 ========================= 00:14:31.515 Active slot: 0 00:14:31.515 00:14:31.515 00:14:31.515 Error Log 00:14:31.515 ========= 00:14:31.515 00:14:31.515 Active Namespaces 00:14:31.515 ================= 00:14:31.515 Discovery Log Page 00:14:31.515 ================== 00:14:31.515 Generation Counter: 2 00:14:31.515 Number of Records: 2 00:14:31.515 Record Format: 0 00:14:31.515 00:14:31.515 Discovery Log Entry 0 00:14:31.515 ---------------------- 00:14:31.515 Transport Type: 3 (TCP) 00:14:31.515 Address Family: 1 (IPv4) 00:14:31.515 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:31.515 Entry Flags: 00:14:31.515 Duplicate Returned Information: 1 00:14:31.515 Explicit Persistent Connection Support for Discovery: 1 00:14:31.515 Transport Requirements: 00:14:31.515 Secure Channel: Not Required 00:14:31.516 Port ID: 0 (0x0000) 00:14:31.516 Controller ID: 65535 (0xffff) 00:14:31.516 Admin Max SQ Size: 128 00:14:31.516 Transport Service Identifier: 4420 00:14:31.516 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:31.516 Transport Address: 10.0.0.2 00:14:31.516 
Discovery Log Entry 1 00:14:31.516 ---------------------- 00:14:31.516 Transport Type: 3 (TCP) 00:14:31.516 Address Family: 1 (IPv4) 00:14:31.516 Subsystem Type: 2 (NVM Subsystem) 00:14:31.516 Entry Flags: 00:14:31.516 Duplicate Returned Information: 0 00:14:31.516 Explicit Persistent Connection Support for Discovery: 0 00:14:31.516 Transport Requirements: 00:14:31.516 Secure Channel: Not Required 00:14:31.516 Port ID: 0 (0x0000) 00:14:31.516 Controller ID: 65535 (0xffff) 00:14:31.516 Admin Max SQ Size: 128 00:14:31.516 Transport Service Identifier: 4420 00:14:31.516 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:31.516 Transport Address: 10.0.0.2 [2024-07-12 11:38:34.805138] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:31.516 [2024-07-12 11:38:34.805152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393940) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.516 [2024-07-12 11:38:34.805165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393ac0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.516 [2024-07-12 11:38:34.805175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393c40) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.516 [2024-07-12 11:38:34.805185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.516 [2024-07-12 11:38:34.805199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.805216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.805238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.805285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.805292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.805296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 
11:38:34.805324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.805345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.805406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.805412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.805416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805426] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:31.516 [2024-07-12 11:38:34.805431] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:31.516 [2024-07-12 11:38:34.805441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.805457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.805474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.805519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.805525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.805529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.805561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.805590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.805640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.805646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.805650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805674] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.805681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.805700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.805748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.805755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.805759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.805789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.805806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.805850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.805857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.805861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.805891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.805908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.805949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.805956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.805960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.805975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.805983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.805990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.806007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.806051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.806058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.806062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.806066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.806076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.806081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.806085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.806092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.516 [2024-07-12 11:38:34.806108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.516 [2024-07-12 11:38:34.806153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.516 [2024-07-12 11:38:34.806160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.516 [2024-07-12 11:38:34.806164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.806168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.516 [2024-07-12 11:38:34.806178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.806183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.516 [2024-07-12 11:38:34.806187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.516 [2024-07-12 11:38:34.806194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.517 [2024-07-12 11:38:34.806211] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.517 [2024-07-12 11:38:34.806256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.517 [2024-07-12 11:38:34.806263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.517 [2024-07-12 11:38:34.806266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.517 [2024-07-12 11:38:34.806281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.517 [2024-07-12 11:38:34.806297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.517 [2024-07-12 11:38:34.806313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.517 
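The long run of near-identical FABRIC PROPERTY GET qid:0 cid:3 records through this stretch is the orderly shutdown of the first (discovery) controller: the trace above reports RTD3E = 0 us and a 10000 ms shutdown timeout, and the host keeps reading CSTS until the controller signals that shutdown has completed (the summary a little further down puts that at about 5 milliseconds). A hedged one-liner for counting those polls in a saved copy of this output (file name hypothetical; the count is approximate because other cid:3 property reads match as well):

  grep -o 'FABRIC PROPERTY GET qid:0 cid:3' nvmf_identify.log | wc -l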
[2024-07-12 11:38:34.806355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.517 [2024-07-12 11:38:34.806362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.517 [2024-07-12 11:38:34.806366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.517 [2024-07-12 11:38:34.806380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.517 [2024-07-12 11:38:34.806396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.517 [2024-07-12 11:38:34.806412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.517 [2024-07-12 11:38:34.806460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.517 [2024-07-12 11:38:34.806467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.517 [2024-07-12 11:38:34.806470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.517 [2024-07-12 11:38:34.806485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.806493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.517 [2024-07-12 11:38:34.806501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.517 [2024-07-12 11:38:34.806517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.517 [2024-07-12 11:38:34.806565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.517 [2024-07-12 11:38:34.806572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.517 [2024-07-12 11:38:34.806575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.810605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.517 [2024-07-12 11:38:34.810624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.810630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.810634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13522c0) 00:14:31.517 [2024-07-12 11:38:34.810643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.517 [2024-07-12 11:38:34.810670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1393dc0, cid 3, qid 0 00:14:31.517 [2024-07-12 11:38:34.810718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.517 [2024-07-12 11:38:34.810726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:14:31.517 [2024-07-12 11:38:34.810730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.517 [2024-07-12 11:38:34.810734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1393dc0) on tqpair=0x13522c0 00:14:31.517 [2024-07-12 11:38:34.810743] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:31.517 00:14:31.517 11:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:31.517 [2024-07-12 11:38:34.855126] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:31.517 [2024-07-12 11:38:34.855188] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74834 ] 00:14:31.780 [2024-07-12 11:38:34.996762] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:31.780 [2024-07-12 11:38:34.996842] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:31.780 [2024-07-12 11:38:34.996849] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:31.780 [2024-07-12 11:38:34.996861] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:31.780 [2024-07-12 11:38:34.996869] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:31.780 [2024-07-12 11:38:34.997009] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:31.780 [2024-07-12 11:38:34.997061] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xebc2c0 0 00:14:31.780 [2024-07-12 11:38:35.001597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:31.780 [2024-07-12 11:38:35.001620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:31.780 [2024-07-12 11:38:35.001626] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:31.780 [2024-07-12 11:38:35.001629] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:31.780 [2024-07-12 11:38:35.001679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.001686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.001691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.780 [2024-07-12 11:38:35.001705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:31.780 [2024-07-12 11:38:35.001735] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.780 [2024-07-12 11:38:35.009600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.780 [2024-07-12 11:38:35.009623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.780 [2024-07-12 11:38:35.009628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.009633] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.780 [2024-07-12 11:38:35.009644] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:31.780 [2024-07-12 11:38:35.009653] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:31.780 [2024-07-12 11:38:35.009660] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:31.780 [2024-07-12 11:38:35.009679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.009685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.009689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.780 [2024-07-12 11:38:35.009699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.780 [2024-07-12 11:38:35.009734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.780 [2024-07-12 11:38:35.009796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.780 [2024-07-12 11:38:35.009803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.780 [2024-07-12 11:38:35.009807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.009812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.780 [2024-07-12 11:38:35.009818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:31.780 [2024-07-12 11:38:35.009826] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:31.780 [2024-07-12 11:38:35.009834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.009839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.009843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.780 [2024-07-12 11:38:35.009850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.780 [2024-07-12 11:38:35.009869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.780 [2024-07-12 11:38:35.010299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.780 [2024-07-12 11:38:35.010318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.780 [2024-07-12 11:38:35.010322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.010327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.780 [2024-07-12 11:38:35.010333] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:31.780 [2024-07-12 11:38:35.010343] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:31.780 [2024-07-12 11:38:35.010351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.780 
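This second pass of spdk_nvme_identify (the host/identify.sh@45 invocation above, EAL file-prefix spdk_pid74834) targets the NVM subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery subsystem, so once it reaches the ready state its report should include the Malloc0 namespace that nvmf_get_subsystems listed at the top of this excerpt. A hedged cross-check against the backing bdev over the target's RPC socket, assuming it is still serving (if -b is not accepted by your rpc.py version, plain bdev_get_bdevs lists every bdev):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0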
[2024-07-12 11:38:35.010355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.010359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.780 [2024-07-12 11:38:35.010367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.780 [2024-07-12 11:38:35.010387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.780 [2024-07-12 11:38:35.010440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.780 [2024-07-12 11:38:35.010447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.780 [2024-07-12 11:38:35.010451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.780 [2024-07-12 11:38:35.010456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.780 [2024-07-12 11:38:35.010461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:31.781 [2024-07-12 11:38:35.010472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.010477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.010481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.010488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.781 [2024-07-12 11:38:35.010507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.781 [2024-07-12 11:38:35.010735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.781 [2024-07-12 11:38:35.010744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.781 [2024-07-12 11:38:35.010748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.010753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.781 [2024-07-12 11:38:35.010758] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:31.781 [2024-07-12 11:38:35.010764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:31.781 [2024-07-12 11:38:35.010772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:31.781 [2024-07-12 11:38:35.010878] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:31.781 [2024-07-12 11:38:35.010883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:31.781 [2024-07-12 11:38:35.010893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.010898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.010902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 
11:38:35.010910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.781 [2024-07-12 11:38:35.010931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.781 [2024-07-12 11:38:35.011310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.781 [2024-07-12 11:38:35.011325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.781 [2024-07-12 11:38:35.011330] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.011334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.781 [2024-07-12 11:38:35.011340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:31.781 [2024-07-12 11:38:35.011351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.011356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.011360] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.011368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.781 [2024-07-12 11:38:35.011398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.781 [2024-07-12 11:38:35.011450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.781 [2024-07-12 11:38:35.011457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.781 [2024-07-12 11:38:35.011461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.011465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.781 [2024-07-12 11:38:35.011470] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:31.781 [2024-07-12 11:38:35.011475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.011484] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:31.781 [2024-07-12 11:38:35.011495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.011514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.011518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.011526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.781 [2024-07-12 11:38:35.011546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.781 [2024-07-12 11:38:35.012012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.781 [2024-07-12 11:38:35.012028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
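The "pdu type = N" values in the nvme_tcp_pdu_ch_handle entries above are the NVMe/TCP transport PDU types: type 1 is the ICResp answering the initial icreq, type 5 is a capsule response carrying a command completion, and type 7 is C2HData carrying Identify payload back to the host. A minimal reference sketch, assuming the numeric values from the NVMe/TCP transport specification (the enum and constant names here are illustrative, not SPDK's own definitions):

/* NVMe/TCP PDU types behind the "pdu type = N" DEBUG lines above.
 * Names are illustrative; values follow the NVMe/TCP transport spec. */
enum nvme_tcp_pdu_type_sketch {
	PDU_IC_REQ       = 0x00, /* host -> controller Initialize Connection Request */
	PDU_IC_RESP      = 0x01, /* "pdu type = 1": reply to the icreq sent at connect */
	PDU_H2C_TERM_REQ = 0x02,
	PDU_C2H_TERM_REQ = 0x03,
	PDU_CAPSULE_CMD  = 0x04, /* command capsule sent by capsule_cmd_send */
	PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": completion for each admin command */
	PDU_H2C_DATA     = 0x06,
	PDU_C2H_DATA     = 0x07, /* "pdu type = 7": Identify data returned to the host */
	PDU_R2T          = 0x09  /* ready-to-transfer for host-to-controller data */
};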
00:14:31.781 [2024-07-12 11:38:35.012033] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012037] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebc2c0): datao=0, datal=4096, cccid=0 00:14:31.781 [2024-07-12 11:38:35.012043] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefd940) on tqpair(0xebc2c0): expected_datao=0, payload_size=4096 00:14:31.781 [2024-07-12 11:38:35.012048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012066] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012079] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.781 [2024-07-12 11:38:35.012095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.781 [2024-07-12 11:38:35.012098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.781 [2024-07-12 11:38:35.012112] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:31.781 [2024-07-12 11:38:35.012117] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:31.781 [2024-07-12 11:38:35.012122] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:31.781 [2024-07-12 11:38:35.012127] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:31.781 [2024-07-12 11:38:35.012132] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:31.781 [2024-07-12 11:38:35.012137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.012147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.012155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.012172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:31.781 [2024-07-12 11:38:35.012194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.781 [2024-07-12 11:38:35.012602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.781 [2024-07-12 11:38:35.012615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.781 [2024-07-12 11:38:35.012619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.781 [2024-07-12 11:38:35.012632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 
[2024-07-12 11:38:35.012637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012641] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.012648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.781 [2024-07-12 11:38:35.012655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.012669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.781 [2024-07-12 11:38:35.012676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.012690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.781 [2024-07-12 11:38:35.012696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.012710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.781 [2024-07-12 11:38:35.012715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.012730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.012738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.012742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebc2c0) 00:14:31.781 [2024-07-12 11:38:35.012749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.781 [2024-07-12 11:38:35.012771] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefd940, cid 0, qid 0 00:14:31.781 [2024-07-12 11:38:35.012779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdac0, cid 1, qid 0 00:14:31.781 [2024-07-12 11:38:35.012784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdc40, cid 2, qid 0 00:14:31.781 [2024-07-12 11:38:35.012789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.781 [2024-07-12 11:38:35.012794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdf40, cid 4, qid 0 00:14:31.781 [2024-07-12 11:38:35.013323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:14:31.781 [2024-07-12 11:38:35.013337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.781 [2024-07-12 11:38:35.013342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.013346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdf40) on tqpair=0xebc2c0 00:14:31.781 [2024-07-12 11:38:35.013352] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:31.781 [2024-07-12 11:38:35.013362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.013372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.013379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:31.781 [2024-07-12 11:38:35.013386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.013391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.781 [2024-07-12 11:38:35.013395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.013402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:31.782 [2024-07-12 11:38:35.013422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdf40, cid 4, qid 0 00:14:31.782 [2024-07-12 11:38:35.013478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.782 [2024-07-12 11:38:35.013485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.782 [2024-07-12 11:38:35.013489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.013493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdf40) on tqpair=0xebc2c0 00:14:31.782 [2024-07-12 11:38:35.013556] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.013567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.013575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.017606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.017617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.782 [2024-07-12 11:38:35.017648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdf40, cid 4, qid 0 00:14:31.782 [2024-07-12 11:38:35.017723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.782 [2024-07-12 11:38:35.017731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.782 [2024-07-12 11:38:35.017735] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.017739] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebc2c0): datao=0, datal=4096, cccid=4 00:14:31.782 [2024-07-12 11:38:35.017744] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefdf40) on tqpair(0xebc2c0): expected_datao=0, payload_size=4096 00:14:31.782 [2024-07-12 11:38:35.017749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.017757] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.017761] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.017770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.782 [2024-07-12 11:38:35.017776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.782 [2024-07-12 11:38:35.017780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.017784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdf40) on tqpair=0xebc2c0 00:14:31.782 [2024-07-12 11:38:35.017803] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:31.782 [2024-07-12 11:38:35.017816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.017827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.017836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.017840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.017848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.782 [2024-07-12 11:38:35.017870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdf40, cid 4, qid 0 00:14:31.782 [2024-07-12 11:38:35.018264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.782 [2024-07-12 11:38:35.018280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.782 [2024-07-12 11:38:35.018284] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018288] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebc2c0): datao=0, datal=4096, cccid=4 00:14:31.782 [2024-07-12 11:38:35.018294] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefdf40) on tqpair(0xebc2c0): expected_datao=0, payload_size=4096 00:14:31.782 [2024-07-12 11:38:35.018299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018306] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018311] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.782 [2024-07-12 11:38:35.018326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.782 [2024-07-12 11:38:35.018330] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdf40) on tqpair=0xebc2c0 
00:14:31.782 [2024-07-12 11:38:35.018350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018375] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.018383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.782 [2024-07-12 11:38:35.018404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdf40, cid 4, qid 0 00:14:31.782 [2024-07-12 11:38:35.018564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.782 [2024-07-12 11:38:35.018571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.782 [2024-07-12 11:38:35.018574] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018591] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebc2c0): datao=0, datal=4096, cccid=4 00:14:31.782 [2024-07-12 11:38:35.018597] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefdf40) on tqpair(0xebc2c0): expected_datao=0, payload_size=4096 00:14:31.782 [2024-07-12 11:38:35.018602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018610] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018614] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.782 [2024-07-12 11:38:35.018707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.782 [2024-07-12 11:38:35.018711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdf40) on tqpair=0xebc2c0 00:14:31.782 [2024-07-12 11:38:35.018735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018774] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018780] 
nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:31.782 [2024-07-12 11:38:35.018785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:31.782 [2024-07-12 11:38:35.018791] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:31.782 [2024-07-12 11:38:35.018808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.018821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.782 [2024-07-12 11:38:35.018828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.018836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.018843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.782 [2024-07-12 11:38:35.018869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdf40, cid 4, qid 0 00:14:31.782 [2024-07-12 11:38:35.018877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefe0c0, cid 5, qid 0 00:14:31.782 [2024-07-12 11:38:35.019449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.782 [2024-07-12 11:38:35.019465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.782 [2024-07-12 11:38:35.019470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.019474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdf40) on tqpair=0xebc2c0 00:14:31.782 [2024-07-12 11:38:35.019481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.782 [2024-07-12 11:38:35.019487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.782 [2024-07-12 11:38:35.019491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.019495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefe0c0) on tqpair=0xebc2c0 00:14:31.782 [2024-07-12 11:38:35.019506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.019511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.019519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.782 [2024-07-12 11:38:35.019538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefe0c0, cid 5, qid 0 00:14:31.782 [2024-07-12 11:38:35.019709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.782 [2024-07-12 11:38:35.019719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.782 [2024-07-12 11:38:35.019722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.019727] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefe0c0) on tqpair=0xebc2c0 00:14:31.782 [2024-07-12 11:38:35.019738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.019742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.019750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.782 [2024-07-12 11:38:35.019769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefe0c0, cid 5, qid 0 00:14:31.782 [2024-07-12 11:38:35.020175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.782 [2024-07-12 11:38:35.020192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.782 [2024-07-12 11:38:35.020197] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.020201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefe0c0) on tqpair=0xebc2c0 00:14:31.782 [2024-07-12 11:38:35.020213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.782 [2024-07-12 11:38:35.020218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebc2c0) 00:14:31.782 [2024-07-12 11:38:35.020226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.783 [2024-07-12 11:38:35.020248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefe0c0, cid 5, qid 0 00:14:31.783 [2024-07-12 11:38:35.020300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.783 [2024-07-12 11:38:35.020307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.783 [2024-07-12 11:38:35.020311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.020315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefe0c0) on tqpair=0xebc2c0 00:14:31.783 [2024-07-12 11:38:35.020336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.020342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xebc2c0) 00:14:31.783 [2024-07-12 11:38:35.020350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.783 [2024-07-12 11:38:35.020358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.020362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xebc2c0) 00:14:31.783 [2024-07-12 11:38:35.020369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.783 [2024-07-12 11:38:35.020376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.020381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xebc2c0) 00:14:31.783 [2024-07-12 11:38:35.020387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.783 [2024-07-12 
11:38:35.020399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.020403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xebc2c0) 00:14:31.783 [2024-07-12 11:38:35.020410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.783 [2024-07-12 11:38:35.020431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefe0c0, cid 5, qid 0 00:14:31.783 [2024-07-12 11:38:35.020438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefdf40, cid 4, qid 0 00:14:31.783 [2024-07-12 11:38:35.020444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefe240, cid 6, qid 0 00:14:31.783 [2024-07-12 11:38:35.020449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefe3c0, cid 7, qid 0 00:14:31.783 [2024-07-12 11:38:35.021129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.783 [2024-07-12 11:38:35.021145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.783 [2024-07-12 11:38:35.021149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021153] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebc2c0): datao=0, datal=8192, cccid=5 00:14:31.783 [2024-07-12 11:38:35.021159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefe0c0) on tqpair(0xebc2c0): expected_datao=0, payload_size=8192 00:14:31.783 [2024-07-12 11:38:35.021163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021184] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.783 [2024-07-12 11:38:35.021203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.783 [2024-07-12 11:38:35.021206] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021211] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebc2c0): datao=0, datal=512, cccid=4 00:14:31.783 [2024-07-12 11:38:35.021215] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefdf40) on tqpair(0xebc2c0): expected_datao=0, payload_size=512 00:14:31.783 [2024-07-12 11:38:35.021220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021227] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021231] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.783 [2024-07-12 11:38:35.021243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.783 [2024-07-12 11:38:35.021246] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021250] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebc2c0): datao=0, datal=512, cccid=6 00:14:31.783 [2024-07-12 11:38:35.021255] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefe240) on tqpair(0xebc2c0): expected_datao=0, payload_size=512 00:14:31.783 
[2024-07-12 11:38:35.021259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021266] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021270] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:31.783 [2024-07-12 11:38:35.021281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:31.783 [2024-07-12 11:38:35.021285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021289] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xebc2c0): datao=0, datal=4096, cccid=7 00:14:31.783 [2024-07-12 11:38:35.021293] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xefe3c0) on tqpair(0xebc2c0): expected_datao=0, payload_size=4096 00:14:31.783 [2024-07-12 11:38:35.021298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021305] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021309] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.783 [2024-07-12 11:38:35.021323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.783 [2024-07-12 11:38:35.021327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefe0c0) on tqpair=0xebc2c0 00:14:31.783 [2024-07-12 11:38:35.021350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.783 [2024-07-12 11:38:35.021357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.783 [2024-07-12 11:38:35.021360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdf40) on tqpair=0xebc2c0 00:14:31.783 [2024-07-12 11:38:35.021378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.783 [2024-07-12 11:38:35.021384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.783 [2024-07-12 11:38:35.021388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefe240) on tqpair=0xebc2c0 00:14:31.783 [2024-07-12 11:38:35.021399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.783 [2024-07-12 11:38:35.021405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.783 [2024-07-12 11:38:35.021409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.783 [2024-07-12 11:38:35.021413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefe3c0) on tqpair=0xebc2c0 00:14:31.783 ===================================================== 00:14:31.783 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.783 ===================================================== 00:14:31.783 Controller Capabilities/Features 00:14:31.783 ================================ 00:14:31.783 Vendor ID: 8086 00:14:31.783 Subsystem Vendor ID: 8086 00:14:31.783 Serial Number: SPDK00000000000001 00:14:31.783 Model Number: 
SPDK bdev Controller
00:14:31.783 Firmware Version: 24.09
00:14:31.783 Recommended Arb Burst: 6
00:14:31.783 IEEE OUI Identifier: e4 d2 5c
00:14:31.783 Multi-path I/O
00:14:31.783 May have multiple subsystem ports: Yes
00:14:31.783 May have multiple controllers: Yes
00:14:31.783 Associated with SR-IOV VF: No
00:14:31.783 Max Data Transfer Size: 131072
00:14:31.783 Max Number of Namespaces: 32
00:14:31.783 Max Number of I/O Queues: 127
00:14:31.783 NVMe Specification Version (VS): 1.3
00:14:31.783 NVMe Specification Version (Identify): 1.3
00:14:31.783 Maximum Queue Entries: 128
00:14:31.783 Contiguous Queues Required: Yes
00:14:31.783 Arbitration Mechanisms Supported
00:14:31.783 Weighted Round Robin: Not Supported
00:14:31.783 Vendor Specific: Not Supported
00:14:31.783 Reset Timeout: 15000 ms
00:14:31.783 Doorbell Stride: 4 bytes
00:14:31.783 NVM Subsystem Reset: Not Supported
00:14:31.783 Command Sets Supported
00:14:31.783 NVM Command Set: Supported
00:14:31.783 Boot Partition: Not Supported
00:14:31.783 Memory Page Size Minimum: 4096 bytes
00:14:31.783 Memory Page Size Maximum: 4096 bytes
00:14:31.783 Persistent Memory Region: Not Supported
00:14:31.783 Optional Asynchronous Events Supported
00:14:31.783 Namespace Attribute Notices: Supported
00:14:31.783 Firmware Activation Notices: Not Supported
00:14:31.783 ANA Change Notices: Not Supported
00:14:31.783 PLE Aggregate Log Change Notices: Not Supported
00:14:31.783 LBA Status Info Alert Notices: Not Supported
00:14:31.783 EGE Aggregate Log Change Notices: Not Supported
00:14:31.783 Normal NVM Subsystem Shutdown event: Not Supported
00:14:31.783 Zone Descriptor Change Notices: Not Supported
00:14:31.783 Discovery Log Change Notices: Not Supported
00:14:31.783 Controller Attributes
00:14:31.783 128-bit Host Identifier: Supported
00:14:31.783 Non-Operational Permissive Mode: Not Supported
00:14:31.783 NVM Sets: Not Supported
00:14:31.783 Read Recovery Levels: Not Supported
00:14:31.783 Endurance Groups: Not Supported
00:14:31.783 Predictable Latency Mode: Not Supported
00:14:31.783 Traffic Based Keep ALive: Not Supported
00:14:31.783 Namespace Granularity: Not Supported
00:14:31.783 SQ Associations: Not Supported
00:14:31.783 UUID List: Not Supported
00:14:31.783 Multi-Domain Subsystem: Not Supported
00:14:31.783 Fixed Capacity Management: Not Supported
00:14:31.783 Variable Capacity Management: Not Supported
00:14:31.783 Delete Endurance Group: Not Supported
00:14:31.783 Delete NVM Set: Not Supported
00:14:31.783 Extended LBA Formats Supported: Not Supported
00:14:31.783 Flexible Data Placement Supported: Not Supported
00:14:31.783 
00:14:31.783 Controller Memory Buffer Support
00:14:31.783 ================================
00:14:31.783 Supported: No
00:14:31.783 
00:14:31.783 Persistent Memory Region Support
00:14:31.783 ================================
00:14:31.783 Supported: No
00:14:31.783 
00:14:31.783 Admin Command Set Attributes
00:14:31.783 ============================
00:14:31.783 Security Send/Receive: Not Supported
00:14:31.783 Format NVM: Not Supported
00:14:31.783 Firmware Activate/Download: Not Supported
00:14:31.783 Namespace Management: Not Supported
00:14:31.783 Device Self-Test: Not Supported
00:14:31.783 Directives: Not Supported
00:14:31.784 NVMe-MI: Not Supported
00:14:31.784 Virtualization Management: Not Supported
00:14:31.784 Doorbell Buffer Config: Not Supported
00:14:31.784 Get LBA Status Capability: Not Supported
00:14:31.784 Command & Feature Lockdown Capability: Not Supported
00:14:31.784 Abort Command Limit: 4
00:14:31.784 Async Event Request Limit: 4
00:14:31.784 Number of Firmware Slots: N/A
00:14:31.784 Firmware Slot 1 Read-Only: N/A
00:14:31.784 Firmware Activation Without Reset: N/A
00:14:31.784 Multiple Update Detection Support: N/A
00:14:31.784 Firmware Update Granularity: No Information Provided
00:14:31.784 Per-Namespace SMART Log: No
00:14:31.784 Asymmetric Namespace Access Log Page: Not Supported
00:14:31.784 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:14:31.784 Command Effects Log Page: Supported
00:14:31.784 Get Log Page Extended Data: Supported
00:14:31.784 Telemetry Log Pages: Not Supported
00:14:31.784 Persistent Event Log Pages: Not Supported
00:14:31.784 Supported Log Pages Log Page: May Support
00:14:31.784 Commands Supported & Effects Log Page: Not Supported
00:14:31.784 Feature Identifiers & Effects Log Page:May Support
00:14:31.784 NVMe-MI Commands & Effects Log Page: May Support
00:14:31.784 Data Area 4 for Telemetry Log: Not Supported
00:14:31.784 Error Log Page Entries Supported: 128
00:14:31.784 Keep Alive: Supported
00:14:31.784 Keep Alive Granularity: 10000 ms
00:14:31.784 
00:14:31.784 NVM Command Set Attributes
00:14:31.784 ==========================
00:14:31.784 Submission Queue Entry Size
00:14:31.784 Max: 64
00:14:31.784 Min: 64
00:14:31.784 Completion Queue Entry Size
00:14:31.784 Max: 16
00:14:31.784 Min: 16
00:14:31.784 Number of Namespaces: 32
00:14:31.784 Compare Command: Supported
00:14:31.784 Write Uncorrectable Command: Not Supported
00:14:31.784 Dataset Management Command: Supported
00:14:31.784 Write Zeroes Command: Supported
00:14:31.784 Set Features Save Field: Not Supported
00:14:31.784 Reservations: Supported
00:14:31.784 Timestamp: Not Supported
00:14:31.784 Copy: Supported
00:14:31.784 Volatile Write Cache: Present
00:14:31.784 Atomic Write Unit (Normal): 1
00:14:31.784 Atomic Write Unit (PFail): 1
00:14:31.784 Atomic Compare & Write Unit: 1
00:14:31.784 Fused Compare & Write: Supported
00:14:31.784 Scatter-Gather List
00:14:31.784 SGL Command Set: Supported
00:14:31.784 SGL Keyed: Supported
00:14:31.784 SGL Bit Bucket Descriptor: Not Supported
00:14:31.784 SGL Metadata Pointer: Not Supported
00:14:31.784 Oversized SGL: Not Supported
00:14:31.784 SGL Metadata Address: Not Supported
00:14:31.784 SGL Offset: Supported
00:14:31.784 Transport SGL Data Block: Not Supported
00:14:31.784 Replay Protected Memory Block: Not Supported
00:14:31.784 
00:14:31.784 Firmware Slot Information
00:14:31.784 =========================
00:14:31.784 Active slot: 1
00:14:31.784 Slot 1 Firmware Revision: 24.09
00:14:31.784 
00:14:31.784 
00:14:31.784 Commands Supported and Effects
00:14:31.784 ==============================
00:14:31.784 Admin Commands
00:14:31.784 --------------
00:14:31.784 Get Log Page (02h): Supported
00:14:31.784 Identify (06h): Supported
00:14:31.784 Abort (08h): Supported
00:14:31.784 Set Features (09h): Supported
00:14:31.784 Get Features (0Ah): Supported
00:14:31.784 Asynchronous Event Request (0Ch): Supported
00:14:31.784 Keep Alive (18h): Supported
00:14:31.784 I/O Commands
00:14:31.784 ------------
00:14:31.784 Flush (00h): Supported LBA-Change
00:14:31.784 Write (01h): Supported LBA-Change
00:14:31.784 Read (02h): Supported
00:14:31.784 Compare (05h): Supported
00:14:31.784 Write Zeroes (08h): Supported LBA-Change
00:14:31.784 Dataset Management (09h): Supported LBA-Change
00:14:31.784 Copy (19h): Supported LBA-Change
00:14:31.784 
00:14:31.784 Error Log
00:14:31.784 =========
00:14:31.784 
00:14:31.784 Arbitration
00:14:31.784 ===========
00:14:31.784 Arbitration Burst: 1 00:14:31.784 00:14:31.784 Power Management 00:14:31.784 ================ 00:14:31.784 Number of Power States: 1 00:14:31.784 Current Power State: Power State #0 00:14:31.784 Power State #0: 00:14:31.784 Max Power: 0.00 W 00:14:31.784 Non-Operational State: Operational 00:14:31.784 Entry Latency: Not Reported 00:14:31.784 Exit Latency: Not Reported 00:14:31.784 Relative Read Throughput: 0 00:14:31.784 Relative Read Latency: 0 00:14:31.784 Relative Write Throughput: 0 00:14:31.784 Relative Write Latency: 0 00:14:31.784 Idle Power: Not Reported 00:14:31.784 Active Power: Not Reported 00:14:31.784 Non-Operational Permissive Mode: Not Supported 00:14:31.784 00:14:31.784 Health Information 00:14:31.784 ================== 00:14:31.784 Critical Warnings: 00:14:31.784 Available Spare Space: OK 00:14:31.784 Temperature: OK 00:14:31.784 Device Reliability: OK 00:14:31.784 Read Only: No 00:14:31.784 Volatile Memory Backup: OK 00:14:31.784 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:31.784 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:31.784 Available Spare: 0% 00:14:31.784 Available Spare Threshold: 0% 00:14:31.784 Life Percentage Used:[2024-07-12 11:38:35.021522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.784 [2024-07-12 11:38:35.021530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xebc2c0) 00:14:31.784 [2024-07-12 11:38:35.021538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.784 [2024-07-12 11:38:35.021563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefe3c0, cid 7, qid 0 00:14:31.784 [2024-07-12 11:38:35.025598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.784 [2024-07-12 11:38:35.025618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.784 [2024-07-12 11:38:35.025623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.784 [2024-07-12 11:38:35.025627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefe3c0) on tqpair=0xebc2c0 00:14:31.784 [2024-07-12 11:38:35.025670] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:31.784 [2024-07-12 11:38:35.025683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefd940) on tqpair=0xebc2c0 00:14:31.784 [2024-07-12 11:38:35.025692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.784 [2024-07-12 11:38:35.025698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdac0) on tqpair=0xebc2c0 00:14:31.784 [2024-07-12 11:38:35.025703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.784 [2024-07-12 11:38:35.025708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefdc40) on tqpair=0xebc2c0 00:14:31.784 [2024-07-12 11:38:35.025713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.784 [2024-07-12 11:38:35.025718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.784 [2024-07-12 11:38:35.025724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:31.784 [2024-07-12 11:38:35.025735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.784 [2024-07-12 11:38:35.025739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.784 [2024-07-12 11:38:35.025744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.784 [2024-07-12 11:38:35.025752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.784 [2024-07-12 11:38:35.025779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.784 [2024-07-12 11:38:35.026158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.784 [2024-07-12 11:38:35.026173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.026177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.026190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.026206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.026231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.026306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.026313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.026316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.026326] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:31.785 [2024-07-12 11:38:35.026331] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:31.785 [2024-07-12 11:38:35.026342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.026358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.026376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.026694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.026709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.026714] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 
11:38:35.026718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.026731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.026747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.026768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.026926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.026940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.026945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.026960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.026969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.026977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.026996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.027303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.027317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.027321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.027326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.027337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.027342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.027346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.027353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.027372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.027616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.027628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.027632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.027637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.027649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
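The repeated FABRIC PROPERTY SET/GET qid:0 cid:3 notices in this stretch are the shutdown path: a fabrics controller has no MMIO BAR, so register writes and reads travel as Property Set/Get admin capsules, and writing CC.SHN then polling CSTS.SHST until "shutdown complete" shows up as one SET followed by a run of GETs. A small reference sketch of the register offsets involved, assuming the standard NVMe controller register map (the macro names are illustrative):

/* Controller registers accessed via the FABRIC PROPERTY GET/SET commands above.
 * Offsets are the standard NVMe register map; names are illustrative. */
#define NVME_REG_CAP  0x00 /* Controller Capabilities ("read cap" during init)   */
#define NVME_REG_VS   0x08 /* Version ("read vs" during init)                    */
#define NVME_REG_CC   0x14 /* Controller Configuration (CC.EN enable, CC.SHN)    */
#define NVME_REG_CSTS 0x1c /* Controller Status (CSTS.RDY and CSTS.SHST polling) */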
00:14:31.785 [2024-07-12 11:38:35.027654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.027658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.027666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.027686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.027934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.027948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.027953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.027957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.027969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.027973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.027977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.027985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.028004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.028256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.028267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.028272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.028287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.028303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.028321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.028466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.028477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.028481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.028496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.028513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.028531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.028883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.028897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.028902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.028918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.028927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.028934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.028954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.029218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.029231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.029236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.029240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.029252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.029257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.029260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.029268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 [2024-07-12 11:38:35.029287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.029499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.029512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.029517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.029521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.029532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.029537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.029541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.785 [2024-07-12 11:38:35.029549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.785 
[2024-07-12 11:38:35.029567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.785 [2024-07-12 11:38:35.030615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.785 [2024-07-12 11:38:35.030635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.785 [2024-07-12 11:38:35.030641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.030645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.785 [2024-07-12 11:38:35.030659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:31.785 [2024-07-12 11:38:35.030665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:31.786 [2024-07-12 11:38:35.030669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xebc2c0) 00:14:31.786 [2024-07-12 11:38:35.030677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:31.786 [2024-07-12 11:38:35.030701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xefddc0, cid 3, qid 0 00:14:31.786 [2024-07-12 11:38:35.030973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:31.786 [2024-07-12 11:38:35.030987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:31.786 [2024-07-12 11:38:35.030991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:31.786 [2024-07-12 11:38:35.030996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xefddc0) on tqpair=0xebc2c0 00:14:31.786 [2024-07-12 11:38:35.031005] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:14:31.786 0% 00:14:31.786 Data Units Read: 0 00:14:31.786 Data Units Written: 0 00:14:31.786 Host Read Commands: 0 00:14:31.786 Host Write Commands: 0 00:14:31.786 Controller Busy Time: 0 minutes 00:14:31.786 Power Cycles: 0 00:14:31.786 Power On Hours: 0 hours 00:14:31.786 Unsafe Shutdowns: 0 00:14:31.786 Unrecoverable Media Errors: 0 00:14:31.786 Lifetime Error Log Entries: 0 00:14:31.786 Warning Temperature Time: 0 minutes 00:14:31.786 Critical Temperature Time: 0 minutes 00:14:31.786 00:14:31.786 Number of Queues 00:14:31.786 ================ 00:14:31.786 Number of I/O Submission Queues: 127 00:14:31.786 Number of I/O Completion Queues: 127 00:14:31.786 00:14:31.786 Active Namespaces 00:14:31.786 ================= 00:14:31.786 Namespace ID:1 00:14:31.786 Error Recovery Timeout: Unlimited 00:14:31.786 Command Set Identifier: NVM (00h) 00:14:31.786 Deallocate: Supported 00:14:31.786 Deallocated/Unwritten Error: Not Supported 00:14:31.786 Deallocated Read Value: Unknown 00:14:31.786 Deallocate in Write Zeroes: Not Supported 00:14:31.786 Deallocated Guard Field: 0xFFFF 00:14:31.786 Flush: Supported 00:14:31.786 Reservation: Supported 00:14:31.786 Namespace Sharing Capabilities: Multiple Controllers 00:14:31.786 Size (in LBAs): 131072 (0GiB) 00:14:31.786 Capacity (in LBAs): 131072 (0GiB) 00:14:31.786 Utilization (in LBAs): 131072 (0GiB) 00:14:31.786 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:31.786 EUI64: ABCDEF0123456789 00:14:31.786 UUID: dec4b29f-cd8b-4593-aa38-d973c50602bb 00:14:31.786 Thin Provisioning: Not Supported 00:14:31.786 Per-NS Atomic Units: Yes 00:14:31.786 Atomic Boundary Size (Normal): 0 00:14:31.786 Atomic Boundary Size (PFail): 0 00:14:31.786 
Atomic Boundary Offset: 0 00:14:31.786 Maximum Single Source Range Length: 65535 00:14:31.786 Maximum Copy Length: 65535 00:14:31.786 Maximum Source Range Count: 1 00:14:31.786 NGUID/EUI64 Never Reused: No 00:14:31.786 Namespace Write Protected: No 00:14:31.786 Number of LBA Formats: 1 00:14:31.786 Current LBA Format: LBA Format #00 00:14:31.786 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:31.786 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.786 rmmod nvme_tcp 00:14:31.786 rmmod nvme_fabrics 00:14:31.786 rmmod nvme_keyring 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74797 ']' 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74797 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74797 ']' 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74797 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74797 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74797' 00:14:31.786 killing process with pid 74797 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74797 00:14:31.786 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74797 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:32.045 00:14:32.045 real 0m2.500s 00:14:32.045 user 0m6.868s 00:14:32.045 sys 0m0.644s 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.045 11:38:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:32.045 ************************************ 00:14:32.045 END TEST nvmf_identify 00:14:32.045 ************************************ 00:14:32.304 11:38:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:32.304 11:38:35 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:32.304 11:38:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:32.304 11:38:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.304 11:38:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.304 ************************************ 00:14:32.304 START TEST nvmf_perf 00:14:32.304 ************************************ 00:14:32.304 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:32.304 * Looking for test storage... 00:14:32.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:32.304 11:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:32.304 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:32.304 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.304 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.304 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.304 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:32.305 11:38:35 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.305 11:38:35 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:32.305 Cannot find device "nvmf_tgt_br" 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:32.305 Cannot find 
device "nvmf_tgt_br2" 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:32.305 Cannot find device "nvmf_tgt_br" 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:32.305 Cannot find device "nvmf_tgt_br2" 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:32.305 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:32.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:32.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:32.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:14:32.564 00:14:32.564 --- 10.0.0.2 ping statistics --- 00:14:32.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.564 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:32.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:32.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:14:32.564 00:14:32.564 --- 10.0.0.3 ping statistics --- 00:14:32.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.564 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:32.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:32.564 00:14:32.564 --- 10.0.0.1 ping statistics --- 00:14:32.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.564 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74999 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74999 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 74999 ']' 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:14:32.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.564 11:38:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:32.823 [2024-07-12 11:38:36.030621] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:32.823 [2024-07-12 11:38:36.030711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.823 [2024-07-12 11:38:36.166887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.082 [2024-07-12 11:38:36.282607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.082 [2024-07-12 11:38:36.282665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.082 [2024-07-12 11:38:36.282677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.082 [2024-07-12 11:38:36.282685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.082 [2024-07-12 11:38:36.282692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.082 [2024-07-12 11:38:36.282801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.082 [2024-07-12 11:38:36.283010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.082 [2024-07-12 11:38:36.283011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.082 [2024-07-12 11:38:36.283633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.082 [2024-07-12 11:38:36.336979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:33.650 11:38:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.650 11:38:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:14:33.650 11:38:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.650 11:38:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.650 11:38:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:33.650 11:38:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.650 11:38:37 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:33.650 11:38:37 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:34.217 11:38:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:34.217 11:38:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:34.476 11:38:37 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:34.476 11:38:37 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 
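The target bring-up that follows in the trace reduces to a short rpc.py sequence against the nvmf_tgt process that was just started. A condensed sketch, assuming the app is already up and listening on the default /var/tmp/spdk.sock; every command, NQN and address below is copied from the trace itself, only the $rpc shorthand is added for readability:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Back the subsystem with a 64 MB malloc bdev (512-byte blocks); Nvme0n1 comes from gen_nvme.sh above
$rpc bdev_malloc_create 64 512
# Create the TCP transport and a subsystem that accepts any host (-a)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Expose both bdevs as namespaces of the subsystem
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
# Listen on the namespaced target address, both for I/O and for discovery
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this, an initiator on 10.0.0.1 can reach the subsystem at 10.0.0.2:4420, which is what the spdk_nvme_perf runs below exercise.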
00:14:34.735 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:34.735 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:34.735 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:34.735 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:34.735 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:34.994 [2024-07-12 11:38:38.306421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.994 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:35.253 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:35.253 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:35.512 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:35.512 11:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:35.771 11:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.030 [2024-07-12 11:38:39.299547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.030 11:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.289 11:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:36.289 11:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:36.289 11:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:36.289 11:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:37.665 Initializing NVMe Controllers 00:14:37.665 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:37.665 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:37.665 Initialization complete. Launching workers. 00:14:37.665 ======================================================== 00:14:37.665 Latency(us) 00:14:37.665 Device Information : IOPS MiB/s Average min max 00:14:37.665 PCIE (0000:00:10.0) NSID 1 from core 0: 23971.07 93.64 1334.94 392.74 6787.44 00:14:37.665 ======================================================== 00:14:37.665 Total : 23971.07 93.64 1334.94 392.74 6787.44 00:14:37.665 00:14:37.665 11:38:40 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:38.599 Initializing NVMe Controllers 00:14:38.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:38.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:38.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:38.599 Initialization complete. Launching workers. 
00:14:38.599 ======================================================== 00:14:38.599 Latency(us) 00:14:38.599 Device Information : IOPS MiB/s Average min max 00:14:38.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3737.95 14.60 267.22 106.01 6139.71 00:14:38.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8120.34 5010.66 12021.95 00:14:38.599 ======================================================== 00:14:38.599 Total : 3861.95 15.09 519.37 106.01 12021.95 00:14:38.599 00:14:38.858 11:38:42 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:40.230 Initializing NVMe Controllers 00:14:40.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:40.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:40.230 Initialization complete. Launching workers. 00:14:40.230 ======================================================== 00:14:40.230 Latency(us) 00:14:40.230 Device Information : IOPS MiB/s Average min max 00:14:40.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8724.89 34.08 3670.33 603.91 7595.60 00:14:40.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.49 15.75 7988.27 6494.85 9455.06 00:14:40.230 ======================================================== 00:14:40.230 Total : 12756.38 49.83 5034.96 603.91 9455.06 00:14:40.230 00:14:40.230 11:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:40.230 11:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:42.764 Initializing NVMe Controllers 00:14:42.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.764 Controller IO queue size 128, less than required. 00:14:42.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.764 Controller IO queue size 128, less than required. 00:14:42.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:42.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:42.764 Initialization complete. Launching workers. 
00:14:42.764 ======================================================== 00:14:42.764 Latency(us) 00:14:42.764 Device Information : IOPS MiB/s Average min max 00:14:42.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1547.76 386.94 83898.45 40138.37 166583.93 00:14:42.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.28 159.57 206150.82 90263.75 322000.20 00:14:42.764 ======================================================== 00:14:42.764 Total : 2186.04 546.51 119593.80 40138.37 322000.20 00:14:42.764 00:14:42.764 11:38:45 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:42.764 Initializing NVMe Controllers 00:14:42.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.764 Controller IO queue size 128, less than required. 00:14:42.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.764 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:42.764 Controller IO queue size 128, less than required. 00:14:42.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:42.764 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:42.764 WARNING: Some requested NVMe devices were skipped 00:14:42.764 No valid NVMe controllers or AIO or URING devices found 00:14:42.764 11:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:45.296 Initializing NVMe Controllers 00:14:45.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.296 Controller IO queue size 128, less than required. 00:14:45.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:45.296 Controller IO queue size 128, less than required. 00:14:45.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:45.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:45.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:45.296 Initialization complete. Launching workers. 
00:14:45.296 00:14:45.296 ==================== 00:14:45.296 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:45.296 TCP transport: 00:14:45.296 polls: 9801 00:14:45.296 idle_polls: 5368 00:14:45.296 sock_completions: 4433 00:14:45.296 nvme_completions: 6561 00:14:45.296 submitted_requests: 9764 00:14:45.296 queued_requests: 1 00:14:45.296 00:14:45.296 ==================== 00:14:45.296 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:45.296 TCP transport: 00:14:45.296 polls: 12307 00:14:45.296 idle_polls: 8458 00:14:45.296 sock_completions: 3849 00:14:45.296 nvme_completions: 6527 00:14:45.296 submitted_requests: 9790 00:14:45.296 queued_requests: 1 00:14:45.296 ======================================================== 00:14:45.296 Latency(us) 00:14:45.296 Device Information : IOPS MiB/s Average min max 00:14:45.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1636.18 409.04 79596.19 43170.98 131471.24 00:14:45.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1627.70 406.92 79308.94 36136.18 133546.15 00:14:45.296 ======================================================== 00:14:45.296 Total : 3263.88 815.97 79452.94 36136.18 133546.15 00:14:45.296 00:14:45.296 11:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:45.296 11:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.555 11:38:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.555 rmmod nvme_tcp 00:14:45.555 rmmod nvme_fabrics 00:14:45.555 rmmod nvme_keyring 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74999 ']' 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74999 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 74999 ']' 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 74999 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74999 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.814 killing process with pid 74999 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.814 11:38:49 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74999' 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 74999 00:14:45.814 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 74999 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:46.381 ************************************ 00:14:46.381 END TEST nvmf_perf 00:14:46.381 ************************************ 00:14:46.381 00:14:46.381 real 0m14.254s 00:14:46.381 user 0m52.586s 00:14:46.381 sys 0m4.080s 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.381 11:38:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:46.381 11:38:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:46.381 11:38:49 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:46.381 11:38:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.381 11:38:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.381 11:38:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.641 ************************************ 00:14:46.641 START TEST nvmf_fio_host 00:14:46.641 ************************************ 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:46.641 * Looking for test storage... 
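The nvmf_perf measurements above all share the same spdk_nvme_perf invocation shape; only queue depth (-q), I/O size (-o) and run time (-t) vary between runs. A representative sketch with the flag values copied from one of the runs, where the -r transport string points at the listener created earlier:

# 4 KiB random read/write, 50/50 mix (-M 50), queue depth 32, 1 second, over NVMe/TCP
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
# Adding --transport-stat, as in the final run, also prints the per-queue poll and completion counters.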
00:14:46.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.641 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
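The nvmf_veth_init step that follows (and that already ran once for the perf test above) builds a small veth-plus-bridge topology between the host and a target network namespace. A condensed sketch, with interface names and addresses taken from the trace; link-up steps and the preceding cleanup are omitted, and the loop merely condenses three individual enslave commands:

ip netns add nvmf_tgt_ns_spdk
# Three veth pairs: one initiator-side, two target-side
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiator keeps 10.0.0.1; the namespaced target ends get 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bridge the host-side peers together and accept NVMe/TCP (port 4420) on the initiator interface
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The sanity pings in the trace (10.0.0.2/.3 from the host, 10.0.0.1 from inside the namespace) confirm the wiring.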
00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:46.642 Cannot find device "nvmf_tgt_br" 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.642 Cannot find device "nvmf_tgt_br2" 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:46.642 Cannot find device "nvmf_tgt_br" 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:46.642 11:38:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:46.642 Cannot find device "nvmf_tgt_br2" 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.642 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:46.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:14:46.901 00:14:46.901 --- 10.0.0.2 ping statistics --- 00:14:46.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.901 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:46.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:46.901 00:14:46.901 --- 10.0.0.3 ping statistics --- 00:14:46.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.901 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:46.901 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:46.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:46.901 00:14:46.902 --- 10.0.0.1 ping statistics --- 00:14:46.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.902 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75406 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75406 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75406 ']' 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.902 11:38:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:46.902 [2024-07-12 11:38:50.308897] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:46.902 [2024-07-12 11:38:50.308975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.160 [2024-07-12 11:38:50.446347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.160 [2024-07-12 11:38:50.565185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:47.160 [2024-07-12 11:38:50.565504] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.160 [2024-07-12 11:38:50.565768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.160 [2024-07-12 11:38:50.565912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.160 [2024-07-12 11:38:50.566063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.160 [2024-07-12 11:38:50.566313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.160 [2024-07-12 11:38:50.566377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.160 [2024-07-12 11:38:50.566487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.160 [2024-07-12 11:38:50.566496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.418 [2024-07-12 11:38:50.622817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:47.984 11:38:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.984 11:38:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:14:47.984 11:38:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:48.242 [2024-07-12 11:38:51.628445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.242 11:38:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:48.242 11:38:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.242 11:38:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:48.499 11:38:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:48.758 Malloc1 00:14:48.758 11:38:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:49.016 11:38:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.275 11:38:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.534 [2024-07-12 11:38:52.819367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.534 11:38:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:49.792 11:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:49.792 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:49.792 fio-3.35 00:14:49.792 Starting 1 thread 00:14:52.322 00:14:52.322 test: (groupid=0, jobs=1): err= 0: pid=75489: Fri Jul 12 11:38:55 2024 00:14:52.323 read: IOPS=8961, BW=35.0MiB/s (36.7MB/s)(70.2MiB/2006msec) 00:14:52.323 slat (usec): min=2, max=310, avg= 2.55, stdev= 3.05 00:14:52.323 clat (usec): min=2359, max=15075, avg=7415.24, stdev=805.17 00:14:52.323 lat (usec): min=2401, max=15077, avg=7417.78, stdev=805.03 00:14:52.323 clat percentiles (usec): 00:14:52.323 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 00:14:52.323 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7439], 00:14:52.323 | 70.00th=[ 7570], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8291], 00:14:52.323 | 99.00th=[11076], 99.50th=[12911], 99.90th=[14222], 99.95th=[15008], 00:14:52.323 | 99.99th=[15008] 00:14:52.323 bw ( KiB/s): min=33640, max=36840, per=99.96%, avg=35832.00, stdev=1475.60, samples=4 00:14:52.323 iops : min= 8410, max= 9210, avg=8957.50, stdev=368.60, samples=4 00:14:52.323 write: IOPS=8982, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2006msec); 0 zone resets 00:14:52.323 slat (usec): 
min=2, max=227, avg= 2.60, stdev= 1.94 00:14:52.323 clat (usec): min=2210, max=13946, avg=6785.65, stdev=737.17 00:14:52.323 lat (usec): min=2223, max=13948, avg=6788.25, stdev=737.13 00:14:52.323 clat percentiles (usec): 00:14:52.323 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:14:52.323 | 30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 00:14:52.323 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7308], 95.00th=[ 7635], 00:14:52.323 | 99.00th=[10159], 99.50th=[11731], 99.90th=[13042], 99.95th=[13173], 00:14:52.323 | 99.99th=[13960] 00:14:52.323 bw ( KiB/s): min=34568, max=36672, per=99.93%, avg=35906.00, stdev=949.82, samples=4 00:14:52.323 iops : min= 8642, max= 9168, avg=8976.50, stdev=237.46, samples=4 00:14:52.323 lat (msec) : 4=0.12%, 10=98.65%, 20=1.23% 00:14:52.323 cpu : usr=68.13%, sys=23.49%, ctx=7, majf=0, minf=7 00:14:52.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:52.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:52.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:52.323 issued rwts: total=17977,18019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:52.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:52.323 00:14:52.323 Run status group 0 (all jobs): 00:14:52.323 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.2MiB (73.6MB), run=2006-2006msec 00:14:52.323 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2006-2006msec 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.323 11:38:55 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:52.323 11:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:52.323 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:52.323 fio-3.35 00:14:52.323 Starting 1 thread 00:14:54.850 00:14:54.850 test: (groupid=0, jobs=1): err= 0: pid=75537: Fri Jul 12 11:38:57 2024 00:14:54.850 read: IOPS=8269, BW=129MiB/s (135MB/s)(259MiB/2007msec) 00:14:54.850 slat (usec): min=3, max=116, avg= 3.73, stdev= 1.61 00:14:54.850 clat (usec): min=2002, max=19228, avg=8484.72, stdev=2586.49 00:14:54.850 lat (usec): min=2006, max=19231, avg=8488.45, stdev=2586.54 00:14:54.850 clat percentiles (usec): 00:14:54.850 | 1.00th=[ 4113], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 6128], 00:14:54.850 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 8225], 60.00th=[ 8848], 00:14:54.850 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11731], 95.00th=[13304], 00:14:54.850 | 99.00th=[15401], 99.50th=[16057], 99.90th=[18744], 99.95th=[19006], 00:14:54.850 | 99.99th=[19268] 00:14:54.850 bw ( KiB/s): min=59616, max=79712, per=52.39%, avg=69312.00, stdev=8960.27, samples=4 00:14:54.850 iops : min= 3726, max= 4982, avg=4332.00, stdev=560.02, samples=4 00:14:54.850 write: IOPS=4963, BW=77.6MiB/s (81.3MB/s)(142MiB/1825msec); 0 zone resets 00:14:54.850 slat (usec): min=35, max=199, avg=38.46, stdev= 5.10 00:14:54.850 clat (usec): min=2650, max=23438, avg=12037.79, stdev=2238.40 00:14:54.850 lat (usec): min=2686, max=23474, avg=12076.25, stdev=2238.92 00:14:54.850 clat percentiles (usec): 00:14:54.850 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:14:54.850 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:14:54.850 | 70.00th=[13042], 80.00th=[14091], 90.00th=[15139], 95.00th=[15926], 00:14:54.850 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19530], 99.95th=[23200], 00:14:54.850 | 99.99th=[23462] 00:14:54.850 bw ( KiB/s): min=61152, max=83168, per=90.51%, avg=71880.00, stdev=9734.01, samples=4 00:14:54.850 iops : min= 3822, max= 5198, avg=4492.50, stdev=608.38, samples=4 00:14:54.850 lat (msec) : 4=0.56%, 10=52.44%, 20=46.98%, 50=0.02% 00:14:54.850 cpu : usr=81.85%, sys=13.76%, ctx=30, majf=0, minf=12 00:14:54.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:54.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:54.850 issued rwts: total=16596,9058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:54.850 00:14:54.850 Run status group 0 (all jobs): 00:14:54.850 READ: bw=129MiB/s (135MB/s), 
129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2007-2007msec 00:14:54.850 WRITE: bw=77.6MiB/s (81.3MB/s), 77.6MiB/s-77.6MiB/s (81.3MB/s-81.3MB/s), io=142MiB (148MB), run=1825-1825msec 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.850 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.850 rmmod nvme_tcp 00:14:54.850 rmmod nvme_fabrics 00:14:55.109 rmmod nvme_keyring 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75406 ']' 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75406 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75406 ']' 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75406 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75406 00:14:55.109 killing process with pid 75406 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75406' 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75406 00:14:55.109 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75406 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
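(Aside on the two fio runs traced above: they drive I/O through the SPDK NVMe fio plugin rather than the kernel initiator. A condensed sketch of the invocation, reconstructed from this trace; the plugin path, config file, and the 10.0.0.2:4420 listener are copied from the log, so treat it as an illustration of the pattern rather than the exact harness logic:

  # Preload the SPDK fio engine and point it at the NVMe-oF/TCP subsystem
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second run follows the same pattern with mock_sgl_config.fio and without the --bs override, which is why it reports 16 KiB blocks.)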
00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:55.368 ************************************ 00:14:55.368 END TEST nvmf_fio_host 00:14:55.368 ************************************ 00:14:55.368 00:14:55.368 real 0m8.803s 00:14:55.368 user 0m36.351s 00:14:55.368 sys 0m2.314s 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.368 11:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:55.368 11:38:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:55.368 11:38:58 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:55.368 11:38:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:55.368 11:38:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.368 11:38:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.368 ************************************ 00:14:55.368 START TEST nvmf_failover 00:14:55.368 ************************************ 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:55.368 * Looking for test storage... 00:14:55.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:55.368 Cannot find device "nvmf_tgt_br" 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:55.368 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:14:55.626 Cannot find device "nvmf_tgt_br2" 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:55.627 Cannot find device "nvmf_tgt_br" 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:55.627 Cannot find device "nvmf_tgt_br2" 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:55.627 11:38:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:55.627 11:38:59 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.627 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:55.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:14:55.886 00:14:55.886 --- 10.0.0.2 ping statistics --- 00:14:55.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.886 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:55.886 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:55.886 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:55.886 00:14:55.886 --- 10.0.0.3 ping statistics --- 00:14:55.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.886 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:55.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:55.886 00:14:55.886 --- 10.0.0.1 ping statistics --- 00:14:55.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.886 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.886 11:38:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75750 00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75750 00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75750 ']' 00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.887 11:38:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:55.887 [2024-07-12 11:38:59.184599] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:14:55.887 [2024-07-12 11:38:59.184693] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.887 [2024-07-12 11:38:59.326680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:56.146 [2024-07-12 11:38:59.439638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.146 [2024-07-12 11:38:59.439688] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.146 [2024-07-12 11:38:59.439699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.146 [2024-07-12 11:38:59.439708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.146 [2024-07-12 11:38:59.439715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
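(For orientation: the failover target above runs inside the nvmf_tgt_ns_spdk network namespace that nvmf_veth_init just built. Condensing the ip/iptables commands traced above into one place, and omitting the initial cleanup and the per-link "up" steps, the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface, 10.0.0.3/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br    # bridge the three peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

nvmf_tgt is then launched with "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt", so its listeners on 10.0.0.2 live in the namespace while fio and bdevperf connect from the host side over the bridge. This is a summary of the trace, not a substitute for nvmf/common.sh.)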
00:14:56.146 [2024-07-12 11:38:59.439925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.146 [2024-07-12 11:38:59.440064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.146 [2024-07-12 11:38:59.440528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.146 [2024-07-12 11:38:59.495403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.713 11:39:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.713 11:39:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:56.713 11:39:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.713 11:39:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.713 11:39:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:56.713 11:39:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.713 11:39:00 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:56.972 [2024-07-12 11:39:00.386424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.972 11:39:00 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:57.539 Malloc0 00:14:57.539 11:39:00 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:57.798 11:39:00 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:57.798 11:39:01 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.056 [2024-07-12 11:39:01.426593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.056 11:39:01 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:58.313 [2024-07-12 11:39:01.694804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:58.313 11:39:01 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:58.570 [2024-07-12 11:39:01.922984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:58.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
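(The target configuration for the failover test, condensed from the rpc.py calls traced above; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, everything else is taken verbatim from the log:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The three listeners are the failover surface: bdevperf attaches NVMe0 through 4420 and 4421, and the test then removes and re-adds listeners while I/O is running, which is what the nvmf_subsystem_remove_listener/add_listener calls traced below exercise.)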
00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75802 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75802 /var/tmp/bdevperf.sock 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75802 ']' 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.570 11:39:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 11:39:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.948 11:39:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:59.948 11:39:02 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:59.948 NVMe0n1 00:14:59.948 11:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:00.219 00:15:00.219 11:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75830 00:15:00.219 11:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.219 11:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:01.155 11:39:04 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.412 11:39:04 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:04.697 11:39:07 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:04.697 00:15:04.698 11:39:08 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:05.265 11:39:08 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:08.627 11:39:11 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.627 [2024-07-12 11:39:11.696037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.627 
11:39:11 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:09.563 11:39:12 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:09.563 11:39:12 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75830 00:15:16.179 0 00:15:16.179 11:39:18 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75802 00:15:16.179 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75802 ']' 00:15:16.179 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75802 00:15:16.179 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:16.179 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.179 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75802 00:15:16.180 killing process with pid 75802 00:15:16.180 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:16.180 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:16.180 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75802' 00:15:16.180 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75802 00:15:16.180 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75802 00:15:16.180 11:39:18 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:16.180 [2024-07-12 11:39:01.994609] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:15:16.180 [2024-07-12 11:39:01.994722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75802 ] 00:15:16.180 [2024-07-12 11:39:02.133897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.180 [2024-07-12 11:39:02.255889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.180 [2024-07-12 11:39:02.311763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:16.180 Running I/O for 15 seconds... 
00:15:16.180 [2024-07-12 11:39:04.797092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.797592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.797732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.797827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.797903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.797983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.798054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.798134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.798203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.798286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.798356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.798429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.798493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.798565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.798652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.798726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.798801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.798890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.798962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.799039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.799108] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.799208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.799281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.799359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.799446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.799527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.799609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.799690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.799761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.799835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.799896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.799980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.800050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.800123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.800197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.800273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.800344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.800420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.800481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.800558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.800657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.800738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.800807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.800868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.800937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.801017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.801093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.801172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.801243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.801313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.801381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.801454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.801523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.801621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.801700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.801779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.801849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.801925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.801994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.802066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.802135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:53 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.802212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.802274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.180 [2024-07-12 11:39:04.802348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.802418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.802492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.802553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.802657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.802724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.802799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.802869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.802942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.803025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.180 [2024-07-12 11:39:04.803101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.180 [2024-07-12 11:39:04.803169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.803239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.803301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.803382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.803469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.803550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.803638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67456 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.803715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.803784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.803866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.803936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.804019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.804088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.804160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.804228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.804300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.804361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.804437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.804506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.804567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.804658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.804737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.804805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.804893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.804963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.805041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.805103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:16.181 [2024-07-12 11:39:04.805174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.805236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.805304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.805372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.805444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.805515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.805606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.805686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.805760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.805822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.805894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.805968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.181 [2024-07-12 11:39:04.806038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.806111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.181 [2024-07-12 11:39:04.806188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.806261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.181 [2024-07-12 11:39:04.806323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.806383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.181 [2024-07-12 11:39:04.806455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.806525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.181 [2024-07-12 11:39:04.806616] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.806709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.181 [2024-07-12 11:39:04.806785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.806855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.181 [2024-07-12 11:39:04.806937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.807009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.181 [2024-07-12 11:39:04.807085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.807155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.807228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.807298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.807376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.807457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.807532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.807651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.807740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.807804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.807879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.807953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.808023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.808093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.808167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.808237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.808310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.808372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.808452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.808515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.808627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.808707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.808778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.808840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.808910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.808973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.809073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.809144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.809223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.809293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.809364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.809426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.181 [2024-07-12 11:39:04.809499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.181 [2024-07-12 11:39:04.809574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.182 [2024-07-12 11:39:04.809670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.809735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.182 [2024-07-12 11:39:04.809806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.809876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.182 [2024-07-12 11:39:04.809954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.810016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.182 [2024-07-12 11:39:04.810090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.810160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.182 [2024-07-12 11:39:04.810233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.810303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.182 [2024-07-12 11:39:04.810374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.810444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.182 [2024-07-12 11:39:04.810528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.810608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.182 [2024-07-12 11:39:04.810687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.810761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.810839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.810916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.810994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.811056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.811126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.811188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.811263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.811334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.811407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.811491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.811567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.811661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.811726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.811801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.811876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.811939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.812015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.812084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.812162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.812232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.812313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.812396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.812466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.812536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.812625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 
[2024-07-12 11:39:04.812700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.812779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.812841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.812919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.812982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.813055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.813117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.813190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.813261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.813334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.813404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.813475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.813537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.813633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.813700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.813780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.813842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.813916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.813987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.182 [2024-07-12 11:39:04.814058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.814127] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20967c0 is same with the state(5) to be set 00:15:16.182 [2024-07-12 11:39:04.814225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.182 [2024-07-12 11:39:04.814292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.182 [2024-07-12 11:39:04.814360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67896 len:8 PRP1 0x0 PRP2 0x0 00:15:16.182 [2024-07-12 11:39:04.814420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.814488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.182 [2024-07-12 11:39:04.814554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.182 [2024-07-12 11:39:04.814625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68224 len:8 PRP1 0x0 PRP2 0x0 00:15:16.182 [2024-07-12 11:39:04.814649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.814666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.182 [2024-07-12 11:39:04.814686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.182 [2024-07-12 11:39:04.814696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68232 len:8 PRP1 0x0 PRP2 0x0 00:15:16.182 [2024-07-12 11:39:04.814708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.814722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.182 [2024-07-12 11:39:04.814732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.182 [2024-07-12 11:39:04.814742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68240 len:8 PRP1 0x0 PRP2 0x0 00:15:16.182 [2024-07-12 11:39:04.814754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.814767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.182 [2024-07-12 11:39:04.814777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.182 [2024-07-12 11:39:04.814787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68248 len:8 PRP1 0x0 PRP2 0x0 00:15:16.182 [2024-07-12 11:39:04.814800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.814813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.182 [2024-07-12 11:39:04.814823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.182 [2024-07-12 11:39:04.814833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68256 len:8 PRP1 0x0 PRP2 0x0 00:15:16.182 [2024-07-12 11:39:04.814845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.182 [2024-07-12 11:39:04.814858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.182 [2024-07-12 11:39:04.814868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.182 [2024-07-12 11:39:04.814878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68264 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.814890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.814903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.814913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.814923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68272 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.814946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.814960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.814970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.814980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68280 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.814993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.815017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.815028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68288 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.815041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.815065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.815075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68296 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.815087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.815111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.815121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68304 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.815133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:16.183 [2024-07-12 11:39:04.815146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.815156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.815166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68312 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.815178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.815201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.815211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68320 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.815224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.815247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.815257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68328 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.815270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.815293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.815310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68336 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.815323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.183 [2024-07-12 11:39:04.815347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.183 [2024-07-12 11:39:04.815357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68344 len:8 PRP1 0x0 PRP2 0x0 00:15:16.183 [2024-07-12 11:39:04.815370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815441] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20967c0 was disconnected and freed. reset controller. 
00:15:16.183 [2024-07-12 11:39:04.815462] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:16.183 [2024-07-12 11:39:04.815522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.183 [2024-07-12 11:39:04.815543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.183 [2024-07-12 11:39:04.815571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.183 [2024-07-12 11:39:04.815618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.183 [2024-07-12 11:39:04.815645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:04.815657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:16.183 [2024-07-12 11:39:04.815702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045570 (9): Bad file descriptor 00:15:16.183 [2024-07-12 11:39:04.819565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:16.183 [2024-07-12 11:39:04.857852] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
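The block above is the first listener removal taking effect: in-flight WRITE/READ commands on the 10.0.0.2:4420 qpair complete as ABORTED - SQ DELETION, bdev_nvme frees the disconnected qpair, starts failover to 10.0.0.2:4421, and the controller reset succeeds. When scanning a long try.txt capture like this one, those transition markers are usually all that matters; a minimal grep sketch, assuming the try.txt path printed earlier in this log:

# Pull only the failover transition markers out of the captured bdevperf log.
grep -E 'bdev_nvme_disconnected_qpair_cb|bdev_nvme_failover_trid|resetting controller|Resetting controller successful' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt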
00:15:16.183 [2024-07-12 11:39:08.421101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.183 [2024-07-12 11:39:08.421173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.421193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.183 [2024-07-12 11:39:08.421207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.421221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.183 [2024-07-12 11:39:08.421234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.421249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.183 [2024-07-12 11:39:08.421291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.421306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045570 is same with the state(5) to be set 00:15:16.183 [2024-07-12 11:39:08.422239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.183 [2024-07-12 11:39:08.422668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.183 [2024-07-12 11:39:08.422681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.422709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.422738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.422766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.422794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.422822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.422851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.422879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.422907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.422936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.422966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.422981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 
11:39:08.423331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.184 [2024-07-12 11:39:08.423697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.184 [2024-07-12 11:39:08.423712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.184 [2024-07-12 11:39:08.423726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.423743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.423765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.423781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.423795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.423810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.423824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.423839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.423862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.423878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.423891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.423906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.423931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.423954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.423968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.423983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:86 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.423996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78712 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 
[2024-07-12 11:39:08.424610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.185 [2024-07-12 11:39:08.424942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.424971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.424986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.425000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.425015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.185 [2024-07-12 11:39:08.425028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.185 [2024-07-12 11:39:08.425043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.186 [2024-07-12 11:39:08.425056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.186 [2024-07-12 11:39:08.425085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.186 [2024-07-12 11:39:08.425113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.186 [2024-07-12 11:39:08.425141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.186 [2024-07-12 11:39:08.425170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.186 [2024-07-12 11:39:08.425496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:15:16.186 [2024-07-12 11:39:08.425511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:16.186 [2024-07-12 11:39:08.425525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.186 [2024-07-12 11:39:08.425540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:16.186 [2024-07-12 11:39:08.425553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.186 [2024-07-12 11:39:08.425568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:16.186 [2024-07-12 11:39:08.425593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.186 [2024-07-12 11:39:08.425609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:16.186 [2024-07-12 11:39:08.425623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.186 [2024-07-12 11:39:08.425637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c7d30 is same with the state(5) to be set
00:15:16.186 [2024-07-12 11:39:08.425654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:16.186 [2024-07-12 11:39:08.425664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:16.186 [2024-07-12 11:39:08.425675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0
00:15:16.186 [2024-07-12 11:39:08.425695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.186 [2024-07-12 11:39:08.425710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:16.186 [2024-07-12 11:39:08.425720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:16.186 [2024-07-12 11:39:08.425731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78824 len:8 PRP1 0x0 PRP2 0x0
00:15:16.186 [2024-07-12 11:39:08.425743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.186 [2024-07-12 11:39:08.425757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:16.186 [2024-07-12 11:39:08.425767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:16.186 [2024-07-12 11:39:08.425782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78832 len:8 PRP1 0x0 PRP2 0x0
00:15:16.186 [2024-07-12 11:39:08.425795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.186 [2024-07-12 11:39:08.425809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:16.186 [2024-07-12 11:39:08.425819] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.425829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78840 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.425842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.425865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.425875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78848 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.425889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.425912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.425923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78856 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.425935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.425958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.425968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.425981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.425994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.426012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.426022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78872 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.426035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.426048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.426059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.426075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.426088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.426102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.426112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.426122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.426135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.426148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.426158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.426173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.426185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.426199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.426209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.426219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.426232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.426246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.426256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.186 [2024-07-12 11:39:08.426266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:15:16.186 [2024-07-12 11:39:08.426279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.186 [2024-07-12 11:39:08.426293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.186 [2024-07-12 11:39:08.426303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.187 [2024-07-12 11:39:08.426313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:15:16.187 [2024-07-12 11:39:08.426325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:08.426339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.187 [2024-07-12 11:39:08.426349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.187 [2024-07-12 11:39:08.426359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:15:16.187 [2024-07-12 11:39:08.426371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:08.426385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.187 [2024-07-12 11:39:08.426395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.187 [2024-07-12 
11:39:08.426405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0
00:15:16.187 [2024-07-12 11:39:08.426418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.187 [2024-07-12 11:39:08.426431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:16.187 [2024-07-12 11:39:08.426447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:16.187 [2024-07-12 11:39:08.426458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0
00:15:16.187 [2024-07-12 11:39:08.426471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.187 [2024-07-12 11:39:08.426525] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c7d30 was disconnected and freed. reset controller.
00:15:16.187 [2024-07-12 11:39:08.426543] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:15:16.187 [2024-07-12 11:39:08.426557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:16.187 [2024-07-12 11:39:08.430378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:16.187 [2024-07-12 11:39:08.430416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045570 (9): Bad file descriptor
00:15:16.187 [2024-07-12 11:39:08.470570] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:15:16.187 [2024-07-12 11:39:12.976746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.187 [2024-07-12 11:39:12.976819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.187 [2024-07-12 11:39:12.976848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.187 [2024-07-12 11:39:12.976865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.187 [2024-07-12 11:39:12.976881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.187 [2024-07-12 11:39:12.976896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.187 [2024-07-12 11:39:12.976911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.187 [2024-07-12 11:39:12.976925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.187 [2024-07-12 11:39:12.976940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:16.187 [2024-07-12 11:39:12.976959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.187 [2024-07-12 
11:39:12.976975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.187 [2024-07-12 11:39:12.976989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.187 [2024-07-12 11:39:12.977018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.187 [2024-07-12 11:39:12.977046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.187 [2024-07-12 11:39:12.977076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.187 [2024-07-12 11:39:12.977131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.187 [2024-07-12 11:39:12.977160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.187 [2024-07-12 11:39:12.977189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.187 [2024-07-12 11:39:12.977853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.187 [2024-07-12 11:39:12.977866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.977952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.977968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.977991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 
lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:16.188 [2024-07-12 11:39:12.978302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.188 [2024-07-12 11:39:12.978544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978615] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.978984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.978998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.979013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.979027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.188 [2024-07-12 11:39:12.979041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.188 [2024-07-12 11:39:12.979055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:28256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:28264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:16.189 [2024-07-12 11:39:12.979550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:28296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:28304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.189 [2024-07-12 11:39:12.979850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979865] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.979978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.979994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.189 [2024-07-12 11:39:12.980306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c6dd0 is same with the state(5) to be set 00:15:16.189 [2024-07-12 11:39:12.980337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.189 [2024-07-12 11:39:12.980348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.189 [2024-07-12 11:39:12.980360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27856 len:8 PRP1 0x0 PRP2 0x0 00:15:16.189 [2024-07-12 11:39:12.980372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.189 [2024-07-12 11:39:12.980387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.189 [2024-07-12 11:39:12.980397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28376 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28384 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28392 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28400 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28408 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28416 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28424 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28432 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:16.190 [2024-07-12 11:39:12.980781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27864 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27872 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27880 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27888 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.980963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.980976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.980986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.980996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27896 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.981009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.981023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.981033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.981052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27904 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.981071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.981085] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.981095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.981105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27912 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.981118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.981131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.190 [2024-07-12 11:39:12.981143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.190 [2024-07-12 11:39:12.981153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27920 len:8 PRP1 0x0 PRP2 0x0 00:15:16.190 [2024-07-12 11:39:12.981166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.981227] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c6dd0 was disconnected and freed. reset controller. 00:15:16.190 [2024-07-12 11:39:12.981245] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:16.190 [2024-07-12 11:39:12.981309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.190 [2024-07-12 11:39:12.981330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.981344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.190 [2024-07-12 11:39:12.981357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.981371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.190 [2024-07-12 11:39:12.981384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.981398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.190 [2024-07-12 11:39:12.981410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.190 [2024-07-12 11:39:12.981423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:16.190 [2024-07-12 11:39:12.985336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:16.190 [2024-07-12 11:39:12.985379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2045570 (9): Bad file descriptor 00:15:16.190 [2024-07-12 11:39:13.020164] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
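The long run of completions above is the bdev_nvme driver draining qpair 1 during the failover from 10.0.0.2:4422 back to 10.0.0.2:4420: every READ/WRITE still queued on the torn-down TCP qpair is completed with ABORTED - SQ DELETION (status 00/08) rather than being lost, the stale admin ASYNC EVENT REQUESTs are aborted the same way, and the controller reset then completes. The same activity can be summarised from the saved bdevperf output; the sketch below is illustrative only and uses just the strings and the try.txt path that appear elsewhere in this log.

  # Illustrative summary of the failover activity recorded above (not part of the test script):
  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep -c 'ABORTED - SQ DELETION' "$log"            # I/O completed as aborted while qpairs were deleted
  grep -c 'Start failover from' "$log"              # path switches between ports 4420/4421/4422
  grep -c 'Resetting controller successful' "$log"  # finished resets; failover.sh expects 3 below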
00:15:16.190
00:15:16.190 Latency(us)
00:15:16.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:16.190 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:16.190 Verification LBA range: start 0x0 length 0x4000
00:15:16.190 NVMe0n1 : 15.01 9103.09 35.56 233.27 0.00 13677.76 647.91 25618.62
00:15:16.190 ===================================================================================================================
00:15:16.190 Total : 9103.09 35.56 233.27 0.00 13677.76 647.91 25618.62
00:15:16.190 Received shutdown signal, test time was about 15.000000 seconds
00:15:16.190
00:15:16.190 Latency(us)
00:15:16.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:16.190 ===================================================================================================================
00:15:16.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:15:16.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76003
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76003 /var/tmp/bdevperf.sock
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76003 ']'
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
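With the 15-second verify run finished, host/failover.sh counts the 'Resetting controller successful' notices (count=3, so the (( count != 3 )) guard passes) and launches a second bdevperf in RPC-wait mode (-z) on its own UNIX socket. waitforlisten comes from autotest_common.sh and is only partially traced here; the sketch below is a rough equivalent built from the paths and flags shown above, with rpc_get_methods polling standing in for the helper's real implementation.

  # Rough sketch of this step (the backgrounding and the polling loop are assumptions).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # Wait until the app answers RPCs on the UNIX socket before configuring it.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        rpc_get_methods &> /dev/null; do
      sleep 0.5
  done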
00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.190 11:39:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:16.758 11:39:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.758 11:39:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:16.758 11:39:19 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:16.758 [2024-07-12 11:39:20.158167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:16.758 11:39:20 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:17.017 [2024-07-12 11:39:20.398440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:17.017 11:39:20 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.584 NVMe0n1 00:15:17.584 11:39:20 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.842 00:15:17.842 11:39:21 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.100 00:15:18.100 11:39:21 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:18.100 11:39:21 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:18.359 11:39:21 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.618 11:39:21 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:21.904 11:39:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:21.904 11:39:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:21.904 11:39:25 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76081 00:15:21.904 11:39:25 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:21.904 11:39:25 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76081 00:15:23.299 0 00:15:23.299 11:39:26 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:23.299 [2024-07-12 11:39:19.009570] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
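The trace above is the second failover pass: listeners are added on ports 4421 and 4422, NVMe0 is attached through the bdevperf RPC socket on 4420, 4421 and 4422 so the bdev has alternate paths registered, the active 4420 path is detached, and bdevperf.py perform_tests drives the queued 1-second verify job before the saved output (try.txt) is dumped. Condensed with the same arguments as in the log; the loop and the shell variables are only shorthand, the script issues the calls one by one.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do                      # 4421/4422 become failover targets
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn      # drop the active path to force failover
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests         # run the queued verify workload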
00:15:23.299 [2024-07-12 11:39:19.009784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76003 ] 00:15:23.299 [2024-07-12 11:39:19.150599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.299 [2024-07-12 11:39:19.257760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.299 [2024-07-12 11:39:19.310321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:23.299 [2024-07-12 11:39:21.891919] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:23.299 [2024-07-12 11:39:21.892044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.299 [2024-07-12 11:39:21.892069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.299 [2024-07-12 11:39:21.892087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.299 [2024-07-12 11:39:21.892101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.299 [2024-07-12 11:39:21.892114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.299 [2024-07-12 11:39:21.892127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.299 [2024-07-12 11:39:21.892141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.299 [2024-07-12 11:39:21.892154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.299 [2024-07-12 11:39:21.892167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:23.299 [2024-07-12 11:39:21.892215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.299 [2024-07-12 11:39:21.892246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d9570 (9): Bad file descriptor 00:15:23.299 [2024-07-12 11:39:21.903912] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:23.299 Running I/O for 1 seconds... 
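The quoted bdevperf output above shows the same failover mechanics as the first run, now triggered by detaching the 4420 path: the admin ASYNC EVENT REQUESTs queued on the old connection complete as ABORTED - SQ DELETION (the (00/08) fields are status code type 0, status code 0x08, Command Aborted due to SQ Deletion), the controller reconnects via 10.0.0.2:4421, and the 1-second verify run proceeds. A quick host-side check of the surviving controller is a single RPC; failover.sh does the equivalent right below with grep -q NVMe0. The shape of the JSON this returns is not shown in this log, so treat the expectation in the comment as an assumption.

  # Hedged check: list the NVMe controllers bdevperf still holds after the run.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_get_controllers     # expected to name NVMe0 with its current transport ID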
00:15:23.299 00:15:23.299 Latency(us) 00:15:23.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.299 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:23.299 Verification LBA range: start 0x0 length 0x4000 00:15:23.299 NVMe0n1 : 1.01 6316.37 24.67 0.00 0.00 20186.66 2174.60 18588.39 00:15:23.299 =================================================================================================================== 00:15:23.299 Total : 6316.37 24.67 0.00 0.00 20186.66 2174.60 18588.39 00:15:23.299 11:39:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:23.299 11:39:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:23.299 11:39:26 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:23.557 11:39:26 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:23.557 11:39:26 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:23.813 11:39:27 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:24.070 11:39:27 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76003 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76003 ']' 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76003 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76003 00:15:27.347 killing process with pid 76003 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76003' 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76003 00:15:27.347 11:39:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76003 00:15:27.605 11:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:27.605 11:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.867 11:39:31 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:27.867 11:39:31 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:27.867 11:39:31 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:27.867 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.867 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:27.867 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.867 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:27.867 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.867 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.867 rmmod nvme_tcp 00:15:27.867 rmmod nvme_fabrics 00:15:28.125 rmmod nvme_keyring 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75750 ']' 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75750 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75750 ']' 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75750 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75750 00:15:28.125 killing process with pid 75750 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75750' 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75750 00:15:28.125 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75750 00:15:28.383 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:28.384 00:15:28.384 real 0m32.990s 00:15:28.384 user 2m7.866s 00:15:28.384 sys 0m5.549s 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.384 11:39:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:28.384 ************************************ 00:15:28.384 END TEST nvmf_failover 00:15:28.384 ************************************ 00:15:28.384 11:39:31 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:28.384 11:39:31 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:28.384 11:39:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.384 11:39:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.384 11:39:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.384 ************************************ 00:15:28.384 START TEST nvmf_host_discovery 00:15:28.384 ************************************ 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:28.384 * Looking for test storage... 00:15:28.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.384 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:28.643 Cannot find device "nvmf_tgt_br" 00:15:28.643 
11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.643 Cannot find device "nvmf_tgt_br2" 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:28.643 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:28.643 Cannot find device "nvmf_tgt_br" 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:28.644 Cannot find device "nvmf_tgt_br2" 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:28.644 11:39:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:28.644 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:28.902 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.902 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.902 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.902 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.902 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:28.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:28.902 00:15:28.902 --- 10.0.0.2 ping statistics --- 00:15:28.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.902 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:28.902 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:28.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:28.902 00:15:28.902 --- 10.0.0.3 ping statistics --- 00:15:28.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.902 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:28.902 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:28.902 00:15:28.903 --- 10.0.0.1 ping statistics --- 00:15:28.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.903 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76352 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76352 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76352 ']' 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.903 11:39:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.903 [2024-07-12 11:39:32.230094] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:15:28.903 [2024-07-12 11:39:32.230193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.161 [2024-07-12 11:39:32.364605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.161 [2024-07-12 11:39:32.478084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
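Stripped of the xtrace prefixes, the nvmf_veth_init sequence traced above builds a small three-address topology: one initiator veth on the host (10.0.0.1), two target veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), and a bridge joining the host-side peers. A condensed sketch using exactly the names and addresses from the trace (the real helper in nvmf/common.sh wraps these calls with cleanup and error suppression):

# Namespace that will hold the target side of the links
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends get bridged;
# the two target interfaces are moved into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 (initiator), 10.0.0.2 and 10.0.0.3 (target)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP traffic, let the bridge forward, then sanity-check with ping
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1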
00:15:29.161 [2024-07-12 11:39:32.478141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.161 [2024-07-12 11:39:32.478154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.161 [2024-07-12 11:39:32.478163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.161 [2024-07-12 11:39:32.478170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.161 [2024-07-12 11:39:32.478202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.161 [2024-07-12 11:39:32.534803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:29.732 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.732 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:29.732 11:39:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.732 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.732 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.989 [2024-07-12 11:39:33.194576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.989 [2024-07-12 11:39:33.202686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.989 null0 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.989 null1 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- 
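The target-side RPCs traced above (the TCP transport, the discovery listener on 8009, and the two null bdevs) can be replayed by hand against the target's default RPC socket. The sketch below assumes the stock scripts/rpc.py under the spdk repo path seen in the trace; option values are copied from the traced rpc_cmd calls:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the same options used in the trace (-o -u 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192

# Expose the well-known discovery subsystem on 10.0.0.2:8009
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009

# Two 1000 MB null bdevs with 512-byte blocks, to back the namespaces added later
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512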
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.989 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.990 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76384 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76384 /tmp/host.sock 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76384 ']' 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.990 11:39:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.990 [2024-07-12 11:39:33.292062] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:15:29.990 [2024-07-12 11:39:33.292402] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76384 ] 00:15:29.990 [2024-07-12 11:39:33.430877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.247 [2024-07-12 11:39:33.560472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.247 [2024-07-12 11:39:33.615682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 
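The host side runs a second nvmf_tgt with its RPC socket at /tmp/host.sock, turns on bdev_nvme logging, and points discovery at the target's 8009 listener. Issued manually, those calls look like this (same rpc.py path assumption as above; all arguments are taken from the traced commands):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/tmp/host.sock

# Enable the bdev_nvme log flag on the host instance
$RPC -s $HOST_SOCK log_set_flag bdev_nvme

# Start discovery against 10.0.0.2:8009, attaching controllers under the "nvme"
# base name and identifying as nqn.2021-12.io.spdk:test
$RPC -s $HOST_SOCK bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test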
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.183 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.184 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.442 [2024-07-12 11:39:34.635069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:31.442 
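The repeated get_subsystem_names and get_bdev_list checks are thin wrappers over two host-side RPCs plus jq/sort/xargs; from the traced pipelines they presumably reduce to something like the functions below (rpc_cmd is the autotest helper that wraps rpc.py, assumed to be in scope):

get_subsystem_names() {
    # Controller names currently attached on the host, space-separated and sorted
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Bdev names exposed on the host, space-separated and sorted
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}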
11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:31.442 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:15:31.443 11:39:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:15:32.058 [2024-07-12 11:39:35.297860] bdev_nvme.c:6982:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:32.058 [2024-07-12 11:39:35.297898] bdev_nvme.c:7062:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:32.058 [2024-07-12 11:39:35.297917] bdev_nvme.c:6945:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:32.058 [2024-07-12 11:39:35.303933] bdev_nvme.c:6911:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:32.058 [2024-07-12 11:39:35.361188] bdev_nvme.c:6801:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:15:32.058 [2024-07-12 11:39:35.361219] bdev_nvme.c:6760:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.625 11:39:35 
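Every expectation in this test is gated by waitforcondition, which the trace shows as a bounded poll: local max=10, (( max-- )), an eval of the condition string, and a sleep 1 between attempts. A sketch consistent with those traced lines (the return 1 after the retries run out is an assumption, since only the success path appears here):

waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        # Condition strings such as '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        # are evaluated in the caller's environment
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}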
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:32.625 11:39:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:32.625 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:32.884 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.885 [2024-07-12 11:39:36.200549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:32.885 [2024-07-12 11:39:36.201138] bdev_nvme.c:6964:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:32.885 [2024-07-12 11:39:36.201171] bdev_nvme.c:6945:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:32.885 [2024-07-12 11:39:36.207125] bdev_nvme.c:6906:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- 
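The is_notification_count_eq checks count notify events newer than the last seen ID: notify_get_notifications -i $notify_id piped through jq '. | length', with notify_id advancing 0 -> 1 -> 2 as single notifications arrive for the added namespaces. A sketch of that bookkeeping, with the cursor-update rule inferred from the traced values:

notify_id=0

get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
        | jq '. | length')
    # Advance the cursor so the next check only counts newer events
    # (inferred: the trace shows notify_id stepping 0 -> 1 -> 2)
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}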
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.885 [2024-07-12 11:39:36.271406] bdev_nvme.c:6801:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:32.885 [2024-07-12 11:39:36.271436] bdev_nvme.c:6760:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:32.885 [2024-07-12 11:39:36.271444] bdev_nvme.c:6760:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.885 11:39:36 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:32.885 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.145 [2024-07-12 11:39:36.425385] bdev_nvme.c:6964:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:33.145 [2024-07-12 11:39:36.425420] bdev_nvme.c:6945:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:33.145 [2024-07-12 11:39:36.428347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.145 [2024-07-12 11:39:36.428502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.145 [2024-07-12 11:39:36.428523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.145 [2024-07-12 11:39:36.428533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.145 [2024-07-12 11:39:36.428543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.145 [2024-07-12 11:39:36.428552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.145 [2024-07-12 11:39:36.428562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.145 [2024-07-12 11:39:36.428572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.145 [2024-07-12 11:39:36.428592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1612600 is same with the state(5) to be set 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.145 11:39:36 
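Removing the 4420 listener triggers an AER on the host, the qpair to 4420 is torn down (the ABORTED - SQ DELETION completions and the Bad file descriptor flush above are that connection being cleaned up), and the discovery poller keeps only the 4421 path. The equivalent manual check, using the same RPCs and jq filter as the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Drop the first data listener on the target side
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# On the host, only the 4421 path should remain for controller nvme0
$RPC -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expected output: 4421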
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:33.145 [2024-07-12 11:39:36.431686] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:33.145 [2024-07-12 11:39:36.431719] bdev_nvme.c:6760:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:33.145 [2024-07-12 11:39:36.431781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1612600 (9): Bad file descriptor 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:33.145 11:39:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.145 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:33.146 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:33.146 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:33.146 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:33.405 11:39:36 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.664 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:33.664 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:33.664 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:33.664 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:33.664 11:39:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:33.664 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.664 11:39:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.599 [2024-07-12 11:39:37.868384] bdev_nvme.c:6982:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:34.599 [2024-07-12 11:39:37.868629] bdev_nvme.c:7062:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:34.599 [2024-07-12 11:39:37.868694] bdev_nvme.c:6945:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:34.599 [2024-07-12 11:39:37.874425] bdev_nvme.c:6911:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:34.599 [2024-07-12 11:39:37.935244] bdev_nvme.c:6801:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:34.599 [2024-07-12 11:39:37.935489] bdev_nvme.c:6760:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.599 request: 00:15:34.599 { 00:15:34.599 "name": "nvme", 00:15:34.599 "trtype": 
"tcp", 00:15:34.599 "traddr": "10.0.0.2", 00:15:34.599 "adrfam": "ipv4", 00:15:34.599 "trsvcid": "8009", 00:15:34.599 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:34.599 "wait_for_attach": true, 00:15:34.599 "method": "bdev_nvme_start_discovery", 00:15:34.599 "req_id": 1 00:15:34.599 } 00:15:34.599 Got JSON-RPC error response 00:15:34.599 response: 00:15:34.599 { 00:15:34.599 "code": -17, 00:15:34.599 "message": "File exists" 00:15:34.599 } 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:34.599 11:39:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:34.599 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.857 request: 00:15:34.857 { 00:15:34.857 "name": "nvme_second", 00:15:34.857 "trtype": "tcp", 00:15:34.857 "traddr": "10.0.0.2", 00:15:34.857 "adrfam": "ipv4", 00:15:34.857 "trsvcid": "8009", 00:15:34.857 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:34.857 "wait_for_attach": true, 00:15:34.857 "method": "bdev_nvme_start_discovery", 00:15:34.857 "req_id": 1 00:15:34.857 } 00:15:34.857 Got JSON-RPC error response 00:15:34.857 response: 00:15:34.857 { 00:15:34.857 "code": -17, 00:15:34.857 "message": "File exists" 00:15:34.857 } 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:34.857 11:39:38 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.857 11:39:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.789 [2024-07-12 11:39:39.204054] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:35.789 [2024-07-12 11:39:39.204132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x162bf80 with addr=10.0.0.2, port=8010 00:15:35.789 [2024-07-12 11:39:39.204161] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:35.789 [2024-07-12 11:39:39.204172] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:35.789 [2024-07-12 11:39:39.204182] bdev_nvme.c:7044:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:37.164 [2024-07-12 11:39:40.204077] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:37.164 [2024-07-12 11:39:40.204170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x162bf80 with addr=10.0.0.2, port=8010 00:15:37.164 [2024-07-12 11:39:40.204198] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:37.164 [2024-07-12 11:39:40.204209] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:37.164 [2024-07-12 11:39:40.204219] bdev_nvme.c:7044:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:38.100 [2024-07-12 11:39:41.203902] bdev_nvme.c:7025:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:38.100 request: 00:15:38.100 { 00:15:38.100 "name": "nvme_second", 00:15:38.100 "trtype": "tcp", 00:15:38.100 "traddr": "10.0.0.2", 00:15:38.100 "adrfam": "ipv4", 00:15:38.100 "trsvcid": "8010", 00:15:38.100 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:38.100 "wait_for_attach": false, 00:15:38.100 "attach_timeout_ms": 3000, 00:15:38.100 "method": "bdev_nvme_start_discovery", 00:15:38.100 "req_id": 1 00:15:38.100 } 00:15:38.100 Got JSON-RPC error response 00:15:38.100 response: 00:15:38.100 { 00:15:38.100 "code": -110, 00:15:38.100 "message": "Connection timed out" 00:15:38.100 } 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 
-- # [[ 1 == 0 ]] 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76384 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.100 rmmod nvme_tcp 00:15:38.100 rmmod nvme_fabrics 00:15:38.100 rmmod nvme_keyring 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76352 ']' 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76352 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76352 ']' 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76352 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76352 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:38.100 killing process with pid 76352 
00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76352' 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76352 00:15:38.100 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76352 00:15:38.358 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.358 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.358 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:38.359 00:15:38.359 real 0m9.942s 00:15:38.359 user 0m19.162s 00:15:38.359 sys 0m1.971s 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 ************************************ 00:15:38.359 END TEST nvmf_host_discovery 00:15:38.359 ************************************ 00:15:38.359 11:39:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:38.359 11:39:41 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:38.359 11:39:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:38.359 11:39:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.359 11:39:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 ************************************ 00:15:38.359 START TEST nvmf_host_multipath_status 00:15:38.359 ************************************ 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:38.359 * Looking for test storage... 
00:15:38.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.359 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:38.617 Cannot find device "nvmf_tgt_br" 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:15:38.617 Cannot find device "nvmf_tgt_br2" 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:38.617 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:38.618 Cannot find device "nvmf_tgt_br" 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:38.618 Cannot find device "nvmf_tgt_br2" 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:38.618 11:39:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:38.618 11:39:42 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:38.618 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:38.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:15:38.876 00:15:38.876 --- 10.0.0.2 ping statistics --- 00:15:38.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.876 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:38.876 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:38.876 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:38.876 00:15:38.876 --- 10.0.0.3 ping statistics --- 00:15:38.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.876 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:38.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:38.876 00:15:38.876 --- 10.0.0.1 ping statistics --- 00:15:38.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.876 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.876 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76830 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76830 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76830 ']' 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.877 11:39:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:38.877 [2024-07-12 11:39:42.235261] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:15:38.877 [2024-07-12 11:39:42.235383] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.136 [2024-07-12 11:39:42.376679] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:39.136 [2024-07-12 11:39:42.512100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.136 [2024-07-12 11:39:42.512160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.136 [2024-07-12 11:39:42.512172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.136 [2024-07-12 11:39:42.512181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.136 [2024-07-12 11:39:42.512189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.136 [2024-07-12 11:39:42.512347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.136 [2024-07-12 11:39:42.512356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.136 [2024-07-12 11:39:42.564902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:40.071 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.071 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:40.071 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.071 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.071 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:40.071 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.071 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76830 00:15:40.071 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:40.071 [2024-07-12 11:39:43.510608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.328 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:40.586 Malloc0 00:15:40.586 11:39:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:40.843 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.101 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.358 [2024-07-12 11:39:44.560207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.358 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:41.358 [2024-07-12 11:39:44.792316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:41.625 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:41.625 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76891 00:15:41.625 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:41.625 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76891 /var/tmp/bdevperf.sock 00:15:41.625 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76891 ']' 00:15:41.625 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.625 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.625 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:41.626 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.626 11:39:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:42.562 11:39:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.562 11:39:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:42.562 11:39:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:42.562 11:39:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:43.128 Nvme0n1 00:15:43.128 11:39:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:43.386 Nvme0n1 00:15:43.386 11:39:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:43.386 11:39:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:45.919 11:39:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:45.919 11:39:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:45.919 11:39:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:45.919 11:39:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:46.915 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:46.916 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:46.916 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:46.916 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:47.175 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.175 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:47.175 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.175 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:47.433 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:47.433 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:47.433 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.433 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:47.693 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.693 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:47.693 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.693 11:39:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:47.951 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.951 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:47.951 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.951 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:48.210 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.210 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:15:48.210 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.210 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:48.470 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.470 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:48.470 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:48.739 11:39:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:49.000 11:39:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:49.933 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:49.933 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:49.933 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.933 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:50.192 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:50.192 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:50.192 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.192 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:50.451 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.451 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:50.451 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.451 11:39:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:50.709 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.709 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:50.709 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.709 11:39:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:50.967 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:50.967 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:50.967 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:50.967 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:51.226 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.226 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:51.226 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.226 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:51.506 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.506 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:51.506 11:39:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:51.772 11:39:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:52.030 11:39:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:52.967 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:52.967 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:52.967 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.967 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:53.226 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:53.226 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:53.226 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.226 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:53.485 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:15:53.485 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:53.485 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.485 11:39:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:53.743 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:53.743 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:53.743 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:53.743 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:54.002 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.002 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:54.002 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.002 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:54.261 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.261 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:54.261 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.261 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:54.520 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.520 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:54.520 11:39:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:54.779 11:39:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:55.037 11:39:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:55.970 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:55.970 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:55.970 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.970 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:56.227 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.227 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:56.227 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.227 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:56.485 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:56.485 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:56.485 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.485 11:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:56.742 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.742 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:56.742 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.742 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:56.999 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.999 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:56.999 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:56.999 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:57.256 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:57.256 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:57.256 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.256 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:57.514 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:57.514 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:15:57.514 11:40:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:57.772 11:40:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:58.062 11:40:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:58.997 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:58.997 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:58.997 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.997 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:59.256 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:59.256 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:59.256 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.256 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:59.517 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:59.517 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:59.517 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:59.517 11:40:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.779 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.779 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:59.779 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.779 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:00.043 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.043 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:00.043 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.043 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:00.310 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:00.310 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:00.310 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.310 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:00.579 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:00.579 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:00.579 11:40:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:00.850 11:40:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:01.124 11:40:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:02.064 11:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:02.064 11:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:02.064 11:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.064 11:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:02.633 11:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:02.633 11:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:02.633 11:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.633 11:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:02.633 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.633 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:02.633 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.633 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:02.892 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.892 11:40:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:02.892 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.892 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:03.150 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:03.150 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:03.150 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.150 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:03.717 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:03.717 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:03.717 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.717 11:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:03.717 11:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:03.717 11:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:04.284 11:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:04.284 11:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:04.284 11:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:04.543 11:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:05.480 11:40:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:05.480 11:40:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:05.480 11:40:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.480 11:40:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:06.046 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.046 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:06.046 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.046 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:06.047 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.047 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:06.047 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.047 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:06.305 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.305 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:06.305 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.305 11:40:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:06.872 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.872 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:06.872 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.872 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:06.872 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.872 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:06.872 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.872 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:07.131 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.131 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:07.131 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:07.389 11:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:07.647 11:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:09.021 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:09.021 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:09.021 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:09.021 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.022 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:09.022 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:09.022 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.022 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:09.279 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.279 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:09.279 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:09.279 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.537 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.537 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:09.537 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.537 11:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:09.800 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.800 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:09.800 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:09.800 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.062 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.062 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:10.062 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.062 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:10.320 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.320 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:10.320 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:10.577 11:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:10.835 11:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:11.768 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:11.768 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:11.768 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.768 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:12.026 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.026 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:12.026 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.026 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:12.591 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.591 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:12.591 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.591 11:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:12.850 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.850 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:12.850 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.850 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:13.108 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.108 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:13.108 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.108 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:13.108 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.108 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:13.108 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.108 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:13.675 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.675 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:13.675 11:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:13.675 11:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:13.933 11:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:14.869 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:14.869 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:14.869 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.869 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:15.434 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.434 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:15.434 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.434 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:15.692 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:15.692 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:16:15.692 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:15.692 11:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.949 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.950 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:15.950 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.950 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:16.208 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.208 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:16.208 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.208 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:16.466 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.466 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:16.466 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.466 11:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76891 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76891 ']' 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76891 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76891 00:16:16.756 killing process with pid 76891 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76891' 00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76891 
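For readability, the helpers exercised in the xtrace above reduce to roughly the following sketch. This is a reconstruction from the traced commands only (the authoritative definitions live in test/nvmf/host/multipath_status.sh), and $rpc_py below is shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py rather than a variable shown in this log:

set_ANA_state() {
    # ANA state for the 4420 listener, then for the 4421 listener
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {
    # Ask bdevperf for its io_paths over its RPC socket and compare one field
    # (current / connected / accessible) of the path on the given listener port
    # against the expected value, exactly as the traced rpc.py | jq | [[ ]] chain does.
    local port=$1 field=$2 expected=$3
    local actual
    actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

check_status() {
    # Expected current/connected/accessible flags for ports 4420 and 4421, in trace order.
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Each check_status call in the log corresponds to one set_ANA_state transition followed by a short settle period (sleep 1) before the path flags are re-read.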
00:16:16.756 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76891 00:16:17.028 Connection closed with partial response: 00:16:17.028 00:16:17.028 00:16:17.028 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76891 00:16:17.028 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:17.028 [2024-07-12 11:39:44.853009] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:16:17.028 [2024-07-12 11:39:44.853110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76891 ] 00:16:17.028 [2024-07-12 11:39:44.983922] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.028 [2024-07-12 11:39:45.094632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.028 [2024-07-12 11:39:45.147095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:17.028 Running I/O for 90 seconds... 00:16:17.028 [2024-07-12 11:40:01.102859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.102931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.102993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54104 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:17.028 [2024-07-12 11:40:01.103343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.028 [2024-07-12 11:40:01.103380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.103404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.103420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.103441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.103455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.103489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.103507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.103529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.104637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.104749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.104791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.104841] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.104874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.104922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.104954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.105031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.105109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.105187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.105264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.105373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.105457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.105554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.105672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 
11:40:01.105719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.105751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.105829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.105909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.105956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.105988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.106067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.106146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.106225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.106303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.106382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.106484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 
cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.106564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.106668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.106757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.106805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.106858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.107080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.107130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.107531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.107604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.107660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.107693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.107740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.107773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.107820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.107851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.107898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.107929] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.107976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.108008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.108081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.029 [2024-07-12 11:40:01.108114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.108161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.108192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.108239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.029 [2024-07-12 11:40:01.108270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:17.029 [2024-07-12 11:40:01.108317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.108348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.108395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.108425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.108473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.108504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.108550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.108602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.108653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.108685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.108732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 
11:40:01.108763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.108809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.108841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.108888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.108918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.108965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.108996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.109955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.109986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.110080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110363] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.110861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.110945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.111000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.111052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.111084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.111131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.111189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.111240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.111279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 
11:40:01.111328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.111365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.111413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.111451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.111527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.030 [2024-07-12 11:40:01.111572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.111662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.111706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.111755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.111792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:17.030 [2024-07-12 11:40:01.111841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.030 [2024-07-12 11:40:01.111878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.111927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.111964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 
cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.112433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.112983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.112997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.113760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:01.113790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.113827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.113857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.113889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.113906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.113936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.113952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.113982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.113997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.114027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:17.031 [2024-07-12 11:40:01.114043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.114073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.114088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.114119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.114134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:01.114182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:01.114202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:17.289102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.031 [2024-07-12 11:40:17.289192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:17.289256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:17.289279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:17.289303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.031 [2024-07-12 11:40:17.289319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:17.031 [2024-07-12 11:40:17.289340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.289355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.289424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.289465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:126 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.289501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.289536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.289572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.289629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.289670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.289713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.289749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.289784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.289820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.289856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.289891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.289942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.289964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.289979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
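Note on the wall of *NOTICE* lines above and below: each pair is an I/O command dump from nvme_qpair.c followed by its error completion. The multipath_status test deliberately flips the target's ANA state, so commands in flight on the path that has just gone inaccessible complete with "ASYMMETRIC ACCESS INACCESSIBLE (03/02)", i.e. NVMe status code type 0x3 (path-related) with status code 0x02, and the host-side multipath logic is expected to retry them on the other path. These messages are therefore expected noise for this test rather than a failure. A rough way to gauge how many I/Os were affected in a saved copy of this console output is sketched below; "build.log" is a placeholder file name, not something the job itself produces, and the counts are approximate since every printed command here is paired with one error completion.

  # Count the ANA-inaccessible completions in a saved copy of this console log.
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l
  # Split the affected commands by opcode (READ vs WRITE).
  grep -oE '[*]NOTICE[*]: (READ|WRITE) sqid:[0-9]+' build.log \
    | awk '{n[$2]++} END {for (op in n) print op, n[op]}'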
00:16:17.032 [2024-07-12 11:40:17.290255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.032 [2024-07-12 11:40:17.290572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.290647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.032 [2024-07-12 11:40:17.290662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:17.032 [2024-07-12 11:40:17.292235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.033 [2024-07-12 11:40:17.292564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.033 [2024-07-12 11:40:17.292621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.033 [2024-07-12 11:40:17.292658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.033 [2024-07-12 11:40:17.292770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.033 [2024-07-12 11:40:17.292806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.033 [2024-07-12 11:40:17.292843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.292963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.292985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:17.033 [2024-07-12 11:40:17.292999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:17.033 [2024-07-12 11:40:17.293021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:17.033 [2024-07-12 11:40:17.293036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:17.033 Received shutdown signal, test time was about 33.265093 seconds 00:16:17.033 00:16:17.033 Latency(us) 00:16:17.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:17.033 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:17.033 Verification LBA range: start 0x0 length 0x4000 00:16:17.033 Nvme0n1 : 33.26 8450.09 33.01 0.00 0.00 15113.09 322.09 4026531.84 00:16:17.033 =================================================================================================================== 00:16:17.033 Total : 8450.09 33.01 0.00 0.00 15113.09 322.09 4026531.84 00:16:17.033 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:17.292 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:17.292 rmmod nvme_tcp 00:16:17.292 rmmod nvme_fabrics 00:16:17.292 rmmod nvme_keyring 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76830 ']' 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76830 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76830 ']' 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76830 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76830 00:16:17.550 killing process with pid 76830 
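The run above ends cleanly. The summary is internally consistent: 8450.09 IOPS of 4 KiB I/O works out to 8450.09 * 4096 / 2^20, roughly 33.0 MiB/s, matching the MiB/s column, and at queue depth 128 Little's law predicts about 128 / 8450, roughly 15.1 ms, close to the reported 15113.09 us average latency. The teardown that follows is the generic path: multipath_status.sh deletes the subsystem over RPC, then nvmftestfini unloads the kernel initiator modules and kills the target process (pid 76830 in this run). If a run like this ever has to be cleaned up by hand, a rough equivalent is sketched below; it reuses the paths and PID visible in this log and is not the script itself, so adjust for your environment.

  # Rough manual equivalent of the teardown performed above.
  sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo modprobe -v -r nvme-tcp       # in this run this also removed nvme_fabrics and nvme_keyring
  sudo modprobe -v -r nvme-fabrics
  sudo kill 76830                    # nvmf target PID from this particular run; wait for it to exit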
00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76830' 00:16:17.550 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76830 00:16:17.551 11:40:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76830 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:17.809 ************************************ 00:16:17.809 END TEST nvmf_host_multipath_status 00:16:17.809 ************************************ 00:16:17.809 00:16:17.809 real 0m39.343s 00:16:17.809 user 2m6.891s 00:16:17.809 sys 0m11.677s 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.809 11:40:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:17.809 11:40:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:17.809 11:40:21 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:17.809 11:40:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:17.809 11:40:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.809 11:40:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.809 ************************************ 00:16:17.809 START TEST nvmf_discovery_remove_ifc 00:16:17.809 ************************************ 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:17.809 * Looking for test storage... 
00:16:17.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.809 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:17.810 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:18.069 Cannot find device "nvmf_tgt_br" 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:18.069 Cannot find device "nvmf_tgt_br2" 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:18.069 Cannot find device "nvmf_tgt_br" 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:18.069 Cannot find device "nvmf_tgt_br2" 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:18.069 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:18.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:18.328 00:16:18.328 --- 10.0.0.2 ping statistics --- 00:16:18.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.328 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:18.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:18.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:16:18.328 00:16:18.328 --- 10.0.0.3 ping statistics --- 00:16:18.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.328 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:18.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:18.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:18.328 00:16:18.328 --- 10.0.0.1 ping statistics --- 00:16:18.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.328 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77677 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77677 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77677 ']' 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.328 11:40:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.328 [2024-07-12 11:40:21.649085] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:16:18.328 [2024-07-12 11:40:21.649352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.587 [2024-07-12 11:40:21.786127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.587 [2024-07-12 11:40:21.902637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.587 [2024-07-12 11:40:21.902921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.587 [2024-07-12 11:40:21.903106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.587 [2024-07-12 11:40:21.903242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.587 [2024-07-12 11:40:21.903278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.587 [2024-07-12 11:40:21.903389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.587 [2024-07-12 11:40:21.960694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.521 [2024-07-12 11:40:22.663621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.521 [2024-07-12 11:40:22.671717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:19.521 null0 00:16:19.521 [2024-07-12 11:40:22.703661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.521 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
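At this point the target side of the discovery_remove_ifc test is fully up: nvmf_veth_init has built the namespace/veth/bridge topology, nvmf_tgt (pid 77677) is running inside nvmf_tgt_ns_spdk, and the RPC batch (fed to rpc_cmd via a heredoc, so not echoed by the trace) has created the TCP transport, a null0 bdev, the nqn.2016-06.io.spdk:cnode0 subsystem, and listeners on 10.0.0.2:8009 (discovery) and 10.0.0.2:4420. For readers skimming the xtrace, the network plumbing alone condenses to the following standalone sketch, with interface names and addresses copied from the trace above (error handling and the later teardown steps omitted):

#!/usr/bin/env bash
# Condensed recap of the nvmf_veth_init plumbing shown in the xtrace above
# (names/addresses verbatim from the trace; cleanup omitted).
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Three veth pairs: one host-side initiator port, two target ports that are
# moved into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiator 10.0.0.1, target ports 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring all ends up and join the bridge-side peers under nvmf_br.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in on the initiator port and forwarding across
# the bridge, then verify reachability the same way the trace does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1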
00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77709 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77709 /tmp/host.sock 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77709 ']' 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.521 11:40:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.521 [2024-07-12 11:40:22.790842] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:16:19.521 [2024-07-12 11:40:22.791405] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77709 ] 00:16:19.521 [2024-07-12 11:40:22.930420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.783 [2024-07-12 11:40:23.093715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.349 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.349 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:20.349 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:20.349 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:20.607 [2024-07-12 11:40:23.863309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.607 11:40:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:21.543 [2024-07-12 11:40:24.916837] bdev_nvme.c:6982:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:21.543 [2024-07-12 11:40:24.917098] bdev_nvme.c:7062:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:21.543 [2024-07-12 11:40:24.917133] bdev_nvme.c:6945:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:21.543 [2024-07-12 11:40:24.922890] bdev_nvme.c:6911:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:21.543 [2024-07-12 11:40:24.980211] bdev_nvme.c:7772:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:21.543 [2024-07-12 11:40:24.980291] bdev_nvme.c:7772:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:21.543 [2024-07-12 11:40:24.980320] bdev_nvme.c:7772:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:21.543 [2024-07-12 11:40:24.980341] bdev_nvme.c:6801:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:21.543 [2024-07-12 11:40:24.980369] bdev_nvme.c:6760:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:21.543 [2024-07-12 11:40:24.985488] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17e6de0 was disconnected and freed. delete nvme_qpair. 
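The host-side app (pid 77709) has now attached the discovered subsystem and exposes its namespace as nvme0n1. Everything that follows is the same once-per-second polling pattern. Reconstructed from the xtrace markers (@29, @33, @34 in discovery_remove_ifc.sh), the two helpers behind it look roughly like this; the real script may differ in details such as a bounded retry count:

# Helpers sketched from the xtrace; rpc_cmd comes from autotest_common.sh.
get_bdev_list() {
    # Ask the host-side app (RPC socket /tmp/host.sock) for its bdevs and
    # normalize the names into one sorted, space-separated string.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list matches the expected value,
    # e.g. "nvme0n1" after discovery attaches, or "" after the path is lost.
    local expected="$*"
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}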
00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:21.543 11:40:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:21.802 11:40:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:22.738 11:40:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:24.115 11:40:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:25.054 11:40:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:25.990 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.990 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.990 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.990 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.990 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:25.990 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.991 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.991 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.991 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:25.991 11:40:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:26.923 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:26.923 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:16:26.923 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:26.923 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.923 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.923 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:26.923 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:26.923 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.181 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:27.181 [2024-07-12 11:40:30.407884] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:27.181 11:40:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:27.181 [2024-07-12 11:40:30.407949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.181 [2024-07-12 11:40:30.407965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.181 [2024-07-12 11:40:30.407979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.181 [2024-07-12 11:40:30.407989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.181 [2024-07-12 11:40:30.408000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.181 [2024-07-12 11:40:30.408010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.181 [2024-07-12 11:40:30.408020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.181 [2024-07-12 11:40:30.408030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.181 [2024-07-12 11:40:30.408040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.181 [2024-07-12 11:40:30.408049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.181 [2024-07-12 11:40:30.408059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174cac0 is same with the state(5) to be set 00:16:27.181 [2024-07-12 11:40:30.417875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174cac0 (9): Bad file descriptor 00:16:27.181 [2024-07-12 11:40:30.427905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.113 11:40:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.113 [2024-07-12 11:40:31.448709] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:28.113 [2024-07-12 11:40:31.449081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174cac0 with addr=10.0.0.2, port=4420 00:16:28.113 [2024-07-12 11:40:31.449405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174cac0 is same with the state(5) to be set 00:16:28.113 [2024-07-12 11:40:31.449796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174cac0 (9): Bad file descriptor 00:16:28.113 [2024-07-12 11:40:31.450695] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:28.113 [2024-07-12 11:40:31.450774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:28.113 [2024-07-12 11:40:31.450799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:28.113 [2024-07-12 11:40:31.450822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:28.113 [2024-07-12 11:40:31.450888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:28.113 [2024-07-12 11:40:31.450915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:28.113 11:40:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:29.047 [2024-07-12 11:40:32.450991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:29.047 [2024-07-12 11:40:32.451069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:29.047 [2024-07-12 11:40:32.451083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:29.047 [2024-07-12 11:40:32.451093] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:29.047 [2024-07-12 11:40:32.451121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:29.047 [2024-07-12 11:40:32.451152] bdev_nvme.c:6733:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:29.047 [2024-07-12 11:40:32.451217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.047 [2024-07-12 11:40:32.451234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.047 [2024-07-12 11:40:32.451248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.047 [2024-07-12 11:40:32.451259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.047 [2024-07-12 11:40:32.451270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.047 [2024-07-12 11:40:32.451279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.047 [2024-07-12 11:40:32.451290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.047 [2024-07-12 11:40:32.451299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.047 [2024-07-12 11:40:32.451309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:29.047 [2024-07-12 11:40:32.451319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.047 [2024-07-12 11:40:32.451328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:29.047 [2024-07-12 11:40:32.451874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1750860 (9): Bad file descriptor 00:16:29.047 [2024-07-12 11:40:32.452885] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:29.047 [2024-07-12 11:40:32.452905] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:29.047 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.047 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.047 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.047 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.047 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.047 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.047 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.047 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:29.307 11:40:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.241 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.241 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.241 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.241 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.242 11:40:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.242 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.242 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.242 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.242 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:30.242 11:40:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:31.176 [2024-07-12 11:40:34.461635] bdev_nvme.c:6982:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:31.176 [2024-07-12 11:40:34.461693] bdev_nvme.c:7062:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:31.176 [2024-07-12 11:40:34.461713] bdev_nvme.c:6945:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:31.176 [2024-07-12 11:40:34.467698] bdev_nvme.c:6911:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:31.176 [2024-07-12 11:40:34.524195] bdev_nvme.c:7772:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:31.176 [2024-07-12 11:40:34.524395] bdev_nvme.c:7772:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:31.176 [2024-07-12 11:40:34.524471] bdev_nvme.c:7772:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:31.176 [2024-07-12 11:40:34.524642] bdev_nvme.c:6801:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:31.176 [2024-07-12 11:40:34.524739] bdev_nvme.c:6760:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:31.176 [2024-07-12 11:40:34.530291] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17f3d90 was disconnected and freed. delete nvme_qpair. 
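With this second "disconnected and freed" message the path has been re-established through a fresh controller (nvme1), which completes the interface-removal scenario. Condensed from the xtrace, the sequence the test walked through between the two attach/free pairs is the following; the ip commands are verbatim from the trace, wait_for_bdev is the helper sketched earlier, and the timing remark reflects the --ctrlr-loss-timeout-sec 2 / --reconnect-delay-sec 1 options passed to bdev_nvme_start_discovery above:

NS=nvmf_tgt_ns_spdk

# 1. Pull the target-side interface out from under the connected controller.
ip netns exec "$NS" ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip link set nvmf_tgt_if down

# 2. Reconnect attempts fail until the controller-loss timeout expires, the
#    controller is torn down, and the namespace bdev disappears.
wait_for_bdev ''

# 3. Restore the interface; the discovery service sees the subsystem again
#    and attaches a new controller, whose namespace shows up as nvme1n1.
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1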
00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77709 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77709 ']' 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77709 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77709 00:16:31.434 killing process with pid 77709 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77709' 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77709 00:16:31.434 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77709 00:16:31.693 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:31.693 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.693 11:40:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.693 rmmod nvme_tcp 00:16:31.693 rmmod nvme_fabrics 00:16:31.693 rmmod nvme_keyring 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:31.693 11:40:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77677 ']' 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77677 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77677 ']' 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77677 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77677 00:16:31.693 killing process with pid 77677 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77677' 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77677 00:16:31.693 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77677 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:31.952 00:16:31.952 real 0m14.274s 00:16:31.952 user 0m24.741s 00:16:31.952 sys 0m2.476s 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:31.952 ************************************ 00:16:31.952 END TEST nvmf_discovery_remove_ifc 00:16:31.952 ************************************ 00:16:31.952 11:40:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.210 11:40:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:32.210 11:40:35 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:32.211 11:40:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:32.211 11:40:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.211 11:40:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:32.211 ************************************ 00:16:32.211 START TEST nvmf_identify_kernel_target 00:16:32.211 ************************************ 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:32.211 * Looking for test storage... 00:16:32.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:32.211 Cannot find device "nvmf_tgt_br" 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:32.211 Cannot find device "nvmf_tgt_br2" 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:32.211 Cannot find device "nvmf_tgt_br" 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:32.211 Cannot find device "nvmf_tgt_br2" 00:16:32.211 11:40:35 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:32.211 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:32.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:32.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:32.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:16:32.470 00:16:32.470 --- 10.0.0.2 ping statistics --- 00:16:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.470 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:32.470 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:32.470 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:16:32.470 00:16:32.470 --- 10.0.0.3 ping statistics --- 00:16:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.470 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:32.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:32.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:32.470 00:16:32.470 --- 10.0.0.1 ping statistics --- 00:16:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.470 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:32.470 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:32.471 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:32.471 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:32.471 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:32.471 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:32.471 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:32.471 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:32.471 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:32.471 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:32.729 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:32.729 11:40:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:32.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:32.986 Waiting for block devices as requested 00:16:32.986 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:32.986 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:33.244 No valid GPT data, bailing 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:33.244 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:33.245 No valid GPT data, bailing 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:33.245 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:33.245 No valid GPT data, bailing 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:33.503 No valid GPT data, bailing 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
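The xtrace entries around this point print only the echo arguments, not their redirection targets; a minimal sketch of the configfs sequence being run here, with the attribute file paths reconstructed from the standard kernel nvmet layout (those file names are assumptions based on that layout, while the NQN, device, address, and port values are the ones visible in this log), is:

    # assumes the nvmet and nvmet-tcp modules are loaded and /dev/nvme1n1 is the unused namespace found above
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys"                                     # create the kernel subsystem
    mkdir "$subsys/namespaces/1"                        # one namespace backed by a local block device
    mkdir "$port"                                       # one TCP listener port

    echo "SPDK-$nqn"    > "$subsys/attr_model"          # shows up later as the Model Number
    echo 1              > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1   > "$subsys/namespaces/1/device_path"
    echo 1              > "$subsys/namespaces/1/enable"

    echo 10.0.0.1       > "$port/addr_traddr"           # NVMF_INITIATOR_IP from the veth setup above
    echo tcp            > "$port/addr_trtype"
    echo 4420           > "$port/addr_trsvcid"
    echo ipv4           > "$port/addr_adrfam"

    ln -s "$subsys" "$port/subsystems/"                 # expose the subsystem on the listener

    # afterwards the discovery below should report two entries: the discovery subsystem and testnqn
    nvme discover -t tcp -a 10.0.0.1 -s 4420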
00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:33.503 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -a 10.0.0.1 -t tcp -s 4420 00:16:33.503 00:16:33.503 Discovery Log Number of Records 2, Generation counter 2 00:16:33.503 =====Discovery Log Entry 0====== 00:16:33.503 trtype: tcp 00:16:33.503 adrfam: ipv4 00:16:33.504 subtype: current discovery subsystem 00:16:33.504 treq: not specified, sq flow control disable supported 00:16:33.504 portid: 1 00:16:33.504 trsvcid: 4420 00:16:33.504 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:33.504 traddr: 10.0.0.1 00:16:33.504 eflags: none 00:16:33.504 sectype: none 00:16:33.504 =====Discovery Log Entry 1====== 00:16:33.504 trtype: tcp 00:16:33.504 adrfam: ipv4 00:16:33.504 subtype: nvme subsystem 00:16:33.504 treq: not specified, sq flow control disable supported 00:16:33.504 portid: 1 00:16:33.504 trsvcid: 4420 00:16:33.504 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:33.504 traddr: 10.0.0.1 00:16:33.504 eflags: none 00:16:33.504 sectype: none 00:16:33.504 11:40:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:33.504 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:33.763 ===================================================== 00:16:33.763 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:33.763 ===================================================== 00:16:33.763 Controller Capabilities/Features 00:16:33.763 ================================ 00:16:33.763 Vendor ID: 0000 00:16:33.763 Subsystem Vendor ID: 0000 00:16:33.763 Serial Number: 7f3edfbcc5bdc70a355f 00:16:33.763 Model Number: Linux 00:16:33.763 Firmware Version: 6.7.0-68 00:16:33.763 Recommended Arb Burst: 0 00:16:33.763 IEEE OUI Identifier: 00 00 00 00:16:33.763 Multi-path I/O 00:16:33.763 May have multiple subsystem ports: No 00:16:33.763 May have multiple controllers: No 00:16:33.763 Associated with SR-IOV VF: No 00:16:33.763 Max Data Transfer Size: Unlimited 00:16:33.763 Max Number of Namespaces: 0 
00:16:33.763 Max Number of I/O Queues: 1024 00:16:33.763 NVMe Specification Version (VS): 1.3 00:16:33.763 NVMe Specification Version (Identify): 1.3 00:16:33.763 Maximum Queue Entries: 1024 00:16:33.763 Contiguous Queues Required: No 00:16:33.763 Arbitration Mechanisms Supported 00:16:33.763 Weighted Round Robin: Not Supported 00:16:33.763 Vendor Specific: Not Supported 00:16:33.763 Reset Timeout: 7500 ms 00:16:33.763 Doorbell Stride: 4 bytes 00:16:33.763 NVM Subsystem Reset: Not Supported 00:16:33.763 Command Sets Supported 00:16:33.763 NVM Command Set: Supported 00:16:33.763 Boot Partition: Not Supported 00:16:33.763 Memory Page Size Minimum: 4096 bytes 00:16:33.763 Memory Page Size Maximum: 4096 bytes 00:16:33.763 Persistent Memory Region: Not Supported 00:16:33.763 Optional Asynchronous Events Supported 00:16:33.763 Namespace Attribute Notices: Not Supported 00:16:33.763 Firmware Activation Notices: Not Supported 00:16:33.763 ANA Change Notices: Not Supported 00:16:33.763 PLE Aggregate Log Change Notices: Not Supported 00:16:33.763 LBA Status Info Alert Notices: Not Supported 00:16:33.763 EGE Aggregate Log Change Notices: Not Supported 00:16:33.763 Normal NVM Subsystem Shutdown event: Not Supported 00:16:33.763 Zone Descriptor Change Notices: Not Supported 00:16:33.763 Discovery Log Change Notices: Supported 00:16:33.763 Controller Attributes 00:16:33.763 128-bit Host Identifier: Not Supported 00:16:33.763 Non-Operational Permissive Mode: Not Supported 00:16:33.763 NVM Sets: Not Supported 00:16:33.763 Read Recovery Levels: Not Supported 00:16:33.763 Endurance Groups: Not Supported 00:16:33.763 Predictable Latency Mode: Not Supported 00:16:33.763 Traffic Based Keep ALive: Not Supported 00:16:33.763 Namespace Granularity: Not Supported 00:16:33.763 SQ Associations: Not Supported 00:16:33.763 UUID List: Not Supported 00:16:33.763 Multi-Domain Subsystem: Not Supported 00:16:33.763 Fixed Capacity Management: Not Supported 00:16:33.763 Variable Capacity Management: Not Supported 00:16:33.763 Delete Endurance Group: Not Supported 00:16:33.763 Delete NVM Set: Not Supported 00:16:33.763 Extended LBA Formats Supported: Not Supported 00:16:33.763 Flexible Data Placement Supported: Not Supported 00:16:33.763 00:16:33.763 Controller Memory Buffer Support 00:16:33.763 ================================ 00:16:33.763 Supported: No 00:16:33.763 00:16:33.763 Persistent Memory Region Support 00:16:33.763 ================================ 00:16:33.763 Supported: No 00:16:33.763 00:16:33.763 Admin Command Set Attributes 00:16:33.764 ============================ 00:16:33.764 Security Send/Receive: Not Supported 00:16:33.764 Format NVM: Not Supported 00:16:33.764 Firmware Activate/Download: Not Supported 00:16:33.764 Namespace Management: Not Supported 00:16:33.764 Device Self-Test: Not Supported 00:16:33.764 Directives: Not Supported 00:16:33.764 NVMe-MI: Not Supported 00:16:33.764 Virtualization Management: Not Supported 00:16:33.764 Doorbell Buffer Config: Not Supported 00:16:33.764 Get LBA Status Capability: Not Supported 00:16:33.764 Command & Feature Lockdown Capability: Not Supported 00:16:33.764 Abort Command Limit: 1 00:16:33.764 Async Event Request Limit: 1 00:16:33.764 Number of Firmware Slots: N/A 00:16:33.764 Firmware Slot 1 Read-Only: N/A 00:16:33.764 Firmware Activation Without Reset: N/A 00:16:33.764 Multiple Update Detection Support: N/A 00:16:33.764 Firmware Update Granularity: No Information Provided 00:16:33.764 Per-Namespace SMART Log: No 00:16:33.764 Asymmetric Namespace Access Log Page: 
Not Supported 00:16:33.764 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:33.764 Command Effects Log Page: Not Supported 00:16:33.764 Get Log Page Extended Data: Supported 00:16:33.764 Telemetry Log Pages: Not Supported 00:16:33.764 Persistent Event Log Pages: Not Supported 00:16:33.764 Supported Log Pages Log Page: May Support 00:16:33.764 Commands Supported & Effects Log Page: Not Supported 00:16:33.764 Feature Identifiers & Effects Log Page:May Support 00:16:33.764 NVMe-MI Commands & Effects Log Page: May Support 00:16:33.764 Data Area 4 for Telemetry Log: Not Supported 00:16:33.764 Error Log Page Entries Supported: 1 00:16:33.764 Keep Alive: Not Supported 00:16:33.764 00:16:33.764 NVM Command Set Attributes 00:16:33.764 ========================== 00:16:33.764 Submission Queue Entry Size 00:16:33.764 Max: 1 00:16:33.764 Min: 1 00:16:33.764 Completion Queue Entry Size 00:16:33.764 Max: 1 00:16:33.764 Min: 1 00:16:33.764 Number of Namespaces: 0 00:16:33.764 Compare Command: Not Supported 00:16:33.764 Write Uncorrectable Command: Not Supported 00:16:33.764 Dataset Management Command: Not Supported 00:16:33.764 Write Zeroes Command: Not Supported 00:16:33.764 Set Features Save Field: Not Supported 00:16:33.764 Reservations: Not Supported 00:16:33.764 Timestamp: Not Supported 00:16:33.764 Copy: Not Supported 00:16:33.764 Volatile Write Cache: Not Present 00:16:33.764 Atomic Write Unit (Normal): 1 00:16:33.764 Atomic Write Unit (PFail): 1 00:16:33.764 Atomic Compare & Write Unit: 1 00:16:33.764 Fused Compare & Write: Not Supported 00:16:33.764 Scatter-Gather List 00:16:33.764 SGL Command Set: Supported 00:16:33.764 SGL Keyed: Not Supported 00:16:33.764 SGL Bit Bucket Descriptor: Not Supported 00:16:33.764 SGL Metadata Pointer: Not Supported 00:16:33.764 Oversized SGL: Not Supported 00:16:33.764 SGL Metadata Address: Not Supported 00:16:33.764 SGL Offset: Supported 00:16:33.764 Transport SGL Data Block: Not Supported 00:16:33.764 Replay Protected Memory Block: Not Supported 00:16:33.764 00:16:33.764 Firmware Slot Information 00:16:33.764 ========================= 00:16:33.764 Active slot: 0 00:16:33.764 00:16:33.764 00:16:33.764 Error Log 00:16:33.764 ========= 00:16:33.764 00:16:33.764 Active Namespaces 00:16:33.764 ================= 00:16:33.764 Discovery Log Page 00:16:33.764 ================== 00:16:33.764 Generation Counter: 2 00:16:33.764 Number of Records: 2 00:16:33.764 Record Format: 0 00:16:33.764 00:16:33.764 Discovery Log Entry 0 00:16:33.764 ---------------------- 00:16:33.764 Transport Type: 3 (TCP) 00:16:33.764 Address Family: 1 (IPv4) 00:16:33.764 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:33.764 Entry Flags: 00:16:33.764 Duplicate Returned Information: 0 00:16:33.764 Explicit Persistent Connection Support for Discovery: 0 00:16:33.764 Transport Requirements: 00:16:33.764 Secure Channel: Not Specified 00:16:33.764 Port ID: 1 (0x0001) 00:16:33.764 Controller ID: 65535 (0xffff) 00:16:33.764 Admin Max SQ Size: 32 00:16:33.764 Transport Service Identifier: 4420 00:16:33.764 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:33.764 Transport Address: 10.0.0.1 00:16:33.764 Discovery Log Entry 1 00:16:33.764 ---------------------- 00:16:33.764 Transport Type: 3 (TCP) 00:16:33.764 Address Family: 1 (IPv4) 00:16:33.764 Subsystem Type: 2 (NVM Subsystem) 00:16:33.764 Entry Flags: 00:16:33.764 Duplicate Returned Information: 0 00:16:33.764 Explicit Persistent Connection Support for Discovery: 0 00:16:33.764 Transport Requirements: 00:16:33.764 
Secure Channel: Not Specified 00:16:33.764 Port ID: 1 (0x0001) 00:16:33.764 Controller ID: 65535 (0xffff) 00:16:33.764 Admin Max SQ Size: 32 00:16:33.764 Transport Service Identifier: 4420 00:16:33.764 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:33.764 Transport Address: 10.0.0.1 00:16:33.764 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:33.764 get_feature(0x01) failed 00:16:33.764 get_feature(0x02) failed 00:16:33.764 get_feature(0x04) failed 00:16:33.764 ===================================================== 00:16:33.764 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:33.764 ===================================================== 00:16:33.764 Controller Capabilities/Features 00:16:33.764 ================================ 00:16:33.764 Vendor ID: 0000 00:16:33.764 Subsystem Vendor ID: 0000 00:16:33.764 Serial Number: f1666caef5d618b671f0 00:16:33.764 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:33.764 Firmware Version: 6.7.0-68 00:16:33.764 Recommended Arb Burst: 6 00:16:33.764 IEEE OUI Identifier: 00 00 00 00:16:33.764 Multi-path I/O 00:16:33.764 May have multiple subsystem ports: Yes 00:16:33.764 May have multiple controllers: Yes 00:16:33.764 Associated with SR-IOV VF: No 00:16:33.764 Max Data Transfer Size: Unlimited 00:16:33.764 Max Number of Namespaces: 1024 00:16:33.764 Max Number of I/O Queues: 128 00:16:33.764 NVMe Specification Version (VS): 1.3 00:16:33.764 NVMe Specification Version (Identify): 1.3 00:16:33.764 Maximum Queue Entries: 1024 00:16:33.764 Contiguous Queues Required: No 00:16:33.764 Arbitration Mechanisms Supported 00:16:33.764 Weighted Round Robin: Not Supported 00:16:33.764 Vendor Specific: Not Supported 00:16:33.764 Reset Timeout: 7500 ms 00:16:33.764 Doorbell Stride: 4 bytes 00:16:33.764 NVM Subsystem Reset: Not Supported 00:16:33.764 Command Sets Supported 00:16:33.764 NVM Command Set: Supported 00:16:33.764 Boot Partition: Not Supported 00:16:33.764 Memory Page Size Minimum: 4096 bytes 00:16:33.764 Memory Page Size Maximum: 4096 bytes 00:16:33.764 Persistent Memory Region: Not Supported 00:16:33.764 Optional Asynchronous Events Supported 00:16:33.764 Namespace Attribute Notices: Supported 00:16:33.764 Firmware Activation Notices: Not Supported 00:16:33.764 ANA Change Notices: Supported 00:16:33.764 PLE Aggregate Log Change Notices: Not Supported 00:16:33.764 LBA Status Info Alert Notices: Not Supported 00:16:33.764 EGE Aggregate Log Change Notices: Not Supported 00:16:33.764 Normal NVM Subsystem Shutdown event: Not Supported 00:16:33.764 Zone Descriptor Change Notices: Not Supported 00:16:33.764 Discovery Log Change Notices: Not Supported 00:16:33.764 Controller Attributes 00:16:33.764 128-bit Host Identifier: Supported 00:16:33.764 Non-Operational Permissive Mode: Not Supported 00:16:33.764 NVM Sets: Not Supported 00:16:33.764 Read Recovery Levels: Not Supported 00:16:33.764 Endurance Groups: Not Supported 00:16:33.764 Predictable Latency Mode: Not Supported 00:16:33.764 Traffic Based Keep ALive: Supported 00:16:33.764 Namespace Granularity: Not Supported 00:16:33.764 SQ Associations: Not Supported 00:16:33.764 UUID List: Not Supported 00:16:33.764 Multi-Domain Subsystem: Not Supported 00:16:33.764 Fixed Capacity Management: Not Supported 00:16:33.764 Variable Capacity Management: Not Supported 00:16:33.765 
Delete Endurance Group: Not Supported 00:16:33.765 Delete NVM Set: Not Supported 00:16:33.765 Extended LBA Formats Supported: Not Supported 00:16:33.765 Flexible Data Placement Supported: Not Supported 00:16:33.765 00:16:33.765 Controller Memory Buffer Support 00:16:33.765 ================================ 00:16:33.765 Supported: No 00:16:33.765 00:16:33.765 Persistent Memory Region Support 00:16:33.765 ================================ 00:16:33.765 Supported: No 00:16:33.765 00:16:33.765 Admin Command Set Attributes 00:16:33.765 ============================ 00:16:33.765 Security Send/Receive: Not Supported 00:16:33.765 Format NVM: Not Supported 00:16:33.765 Firmware Activate/Download: Not Supported 00:16:33.765 Namespace Management: Not Supported 00:16:33.765 Device Self-Test: Not Supported 00:16:33.765 Directives: Not Supported 00:16:33.765 NVMe-MI: Not Supported 00:16:33.765 Virtualization Management: Not Supported 00:16:33.765 Doorbell Buffer Config: Not Supported 00:16:33.765 Get LBA Status Capability: Not Supported 00:16:33.765 Command & Feature Lockdown Capability: Not Supported 00:16:33.765 Abort Command Limit: 4 00:16:33.765 Async Event Request Limit: 4 00:16:33.765 Number of Firmware Slots: N/A 00:16:33.765 Firmware Slot 1 Read-Only: N/A 00:16:33.765 Firmware Activation Without Reset: N/A 00:16:33.765 Multiple Update Detection Support: N/A 00:16:33.765 Firmware Update Granularity: No Information Provided 00:16:33.765 Per-Namespace SMART Log: Yes 00:16:33.765 Asymmetric Namespace Access Log Page: Supported 00:16:33.765 ANA Transition Time : 10 sec 00:16:33.765 00:16:33.765 Asymmetric Namespace Access Capabilities 00:16:33.765 ANA Optimized State : Supported 00:16:33.765 ANA Non-Optimized State : Supported 00:16:33.765 ANA Inaccessible State : Supported 00:16:33.765 ANA Persistent Loss State : Supported 00:16:33.765 ANA Change State : Supported 00:16:33.765 ANAGRPID is not changed : No 00:16:33.765 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:33.765 00:16:33.765 ANA Group Identifier Maximum : 128 00:16:33.765 Number of ANA Group Identifiers : 128 00:16:33.765 Max Number of Allowed Namespaces : 1024 00:16:33.765 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:33.765 Command Effects Log Page: Supported 00:16:33.765 Get Log Page Extended Data: Supported 00:16:33.765 Telemetry Log Pages: Not Supported 00:16:33.765 Persistent Event Log Pages: Not Supported 00:16:33.765 Supported Log Pages Log Page: May Support 00:16:33.765 Commands Supported & Effects Log Page: Not Supported 00:16:33.765 Feature Identifiers & Effects Log Page:May Support 00:16:33.765 NVMe-MI Commands & Effects Log Page: May Support 00:16:33.765 Data Area 4 for Telemetry Log: Not Supported 00:16:33.765 Error Log Page Entries Supported: 128 00:16:33.765 Keep Alive: Supported 00:16:33.765 Keep Alive Granularity: 1000 ms 00:16:33.765 00:16:33.765 NVM Command Set Attributes 00:16:33.765 ========================== 00:16:33.765 Submission Queue Entry Size 00:16:33.765 Max: 64 00:16:33.765 Min: 64 00:16:33.765 Completion Queue Entry Size 00:16:33.765 Max: 16 00:16:33.765 Min: 16 00:16:33.765 Number of Namespaces: 1024 00:16:33.765 Compare Command: Not Supported 00:16:33.765 Write Uncorrectable Command: Not Supported 00:16:33.765 Dataset Management Command: Supported 00:16:33.765 Write Zeroes Command: Supported 00:16:33.765 Set Features Save Field: Not Supported 00:16:33.765 Reservations: Not Supported 00:16:33.765 Timestamp: Not Supported 00:16:33.765 Copy: Not Supported 00:16:33.765 Volatile Write Cache: Present 
00:16:33.765 Atomic Write Unit (Normal): 1 00:16:33.765 Atomic Write Unit (PFail): 1 00:16:33.765 Atomic Compare & Write Unit: 1 00:16:33.765 Fused Compare & Write: Not Supported 00:16:33.765 Scatter-Gather List 00:16:33.765 SGL Command Set: Supported 00:16:33.765 SGL Keyed: Not Supported 00:16:33.765 SGL Bit Bucket Descriptor: Not Supported 00:16:33.765 SGL Metadata Pointer: Not Supported 00:16:33.765 Oversized SGL: Not Supported 00:16:33.765 SGL Metadata Address: Not Supported 00:16:33.765 SGL Offset: Supported 00:16:33.765 Transport SGL Data Block: Not Supported 00:16:33.765 Replay Protected Memory Block: Not Supported 00:16:33.765 00:16:33.765 Firmware Slot Information 00:16:33.765 ========================= 00:16:33.765 Active slot: 0 00:16:33.765 00:16:33.765 Asymmetric Namespace Access 00:16:33.765 =========================== 00:16:33.765 Change Count : 0 00:16:33.765 Number of ANA Group Descriptors : 1 00:16:33.765 ANA Group Descriptor : 0 00:16:33.765 ANA Group ID : 1 00:16:33.765 Number of NSID Values : 1 00:16:33.765 Change Count : 0 00:16:33.765 ANA State : 1 00:16:33.765 Namespace Identifier : 1 00:16:33.765 00:16:33.765 Commands Supported and Effects 00:16:33.765 ============================== 00:16:33.765 Admin Commands 00:16:33.765 -------------- 00:16:33.765 Get Log Page (02h): Supported 00:16:33.765 Identify (06h): Supported 00:16:33.765 Abort (08h): Supported 00:16:33.765 Set Features (09h): Supported 00:16:33.765 Get Features (0Ah): Supported 00:16:33.765 Asynchronous Event Request (0Ch): Supported 00:16:33.765 Keep Alive (18h): Supported 00:16:33.765 I/O Commands 00:16:33.765 ------------ 00:16:33.765 Flush (00h): Supported 00:16:33.765 Write (01h): Supported LBA-Change 00:16:33.765 Read (02h): Supported 00:16:33.765 Write Zeroes (08h): Supported LBA-Change 00:16:33.765 Dataset Management (09h): Supported 00:16:33.765 00:16:33.765 Error Log 00:16:33.765 ========= 00:16:33.765 Entry: 0 00:16:33.765 Error Count: 0x3 00:16:33.765 Submission Queue Id: 0x0 00:16:33.765 Command Id: 0x5 00:16:33.765 Phase Bit: 0 00:16:33.765 Status Code: 0x2 00:16:33.765 Status Code Type: 0x0 00:16:33.765 Do Not Retry: 1 00:16:33.765 Error Location: 0x28 00:16:33.765 LBA: 0x0 00:16:33.765 Namespace: 0x0 00:16:33.765 Vendor Log Page: 0x0 00:16:33.765 ----------- 00:16:33.765 Entry: 1 00:16:33.765 Error Count: 0x2 00:16:33.765 Submission Queue Id: 0x0 00:16:33.765 Command Id: 0x5 00:16:33.765 Phase Bit: 0 00:16:33.765 Status Code: 0x2 00:16:33.765 Status Code Type: 0x0 00:16:33.765 Do Not Retry: 1 00:16:33.765 Error Location: 0x28 00:16:33.765 LBA: 0x0 00:16:33.765 Namespace: 0x0 00:16:33.765 Vendor Log Page: 0x0 00:16:33.765 ----------- 00:16:33.765 Entry: 2 00:16:33.765 Error Count: 0x1 00:16:33.765 Submission Queue Id: 0x0 00:16:33.765 Command Id: 0x4 00:16:33.765 Phase Bit: 0 00:16:33.765 Status Code: 0x2 00:16:33.765 Status Code Type: 0x0 00:16:33.765 Do Not Retry: 1 00:16:33.765 Error Location: 0x28 00:16:33.765 LBA: 0x0 00:16:33.765 Namespace: 0x0 00:16:33.765 Vendor Log Page: 0x0 00:16:33.765 00:16:33.765 Number of Queues 00:16:33.765 ================ 00:16:33.765 Number of I/O Submission Queues: 128 00:16:33.765 Number of I/O Completion Queues: 128 00:16:33.765 00:16:33.765 ZNS Specific Controller Data 00:16:33.765 ============================ 00:16:33.765 Zone Append Size Limit: 0 00:16:33.765 00:16:33.765 00:16:33.765 Active Namespaces 00:16:33.765 ================= 00:16:33.765 get_feature(0x05) failed 00:16:33.765 Namespace ID:1 00:16:33.765 Command Set Identifier: NVM (00h) 
00:16:33.765 Deallocate: Supported 00:16:33.765 Deallocated/Unwritten Error: Not Supported 00:16:33.765 Deallocated Read Value: Unknown 00:16:33.765 Deallocate in Write Zeroes: Not Supported 00:16:33.765 Deallocated Guard Field: 0xFFFF 00:16:33.765 Flush: Supported 00:16:33.765 Reservation: Not Supported 00:16:33.765 Namespace Sharing Capabilities: Multiple Controllers 00:16:33.765 Size (in LBAs): 1310720 (5GiB) 00:16:33.765 Capacity (in LBAs): 1310720 (5GiB) 00:16:33.765 Utilization (in LBAs): 1310720 (5GiB) 00:16:33.765 UUID: 595c895f-7f37-4361-b956-6db1e4812158 00:16:33.765 Thin Provisioning: Not Supported 00:16:33.765 Per-NS Atomic Units: Yes 00:16:33.765 Atomic Boundary Size (Normal): 0 00:16:33.765 Atomic Boundary Size (PFail): 0 00:16:33.765 Atomic Boundary Offset: 0 00:16:33.765 NGUID/EUI64 Never Reused: No 00:16:33.765 ANA group ID: 1 00:16:33.765 Namespace Write Protected: No 00:16:33.765 Number of LBA Formats: 1 00:16:33.765 Current LBA Format: LBA Format #00 00:16:33.765 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:33.765 00:16:33.765 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:33.765 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.765 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.024 rmmod nvme_tcp 00:16:34.024 rmmod nvme_fabrics 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:34.024 
11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:34.024 11:40:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:34.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:34.693 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:34.693 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:34.952 ************************************ 00:16:34.952 END TEST nvmf_identify_kernel_target 00:16:34.952 ************************************ 00:16:34.952 00:16:34.952 real 0m2.746s 00:16:34.952 user 0m0.910s 00:16:34.952 sys 0m1.336s 00:16:34.952 11:40:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.952 11:40:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.952 11:40:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:34.952 11:40:38 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:34.952 11:40:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:34.952 11:40:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.952 11:40:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.952 ************************************ 00:16:34.952 START TEST nvmf_auth_host 00:16:34.952 ************************************ 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:34.952 * Looking for test storage... 
00:16:34.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.952 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:34.953 Cannot find device "nvmf_tgt_br" 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.953 Cannot find device "nvmf_tgt_br2" 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:34.953 Cannot find device "nvmf_tgt_br" 
00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:34.953 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:35.211 Cannot find device "nvmf_tgt_br2" 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.211 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:35.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:35.469 00:16:35.469 --- 10.0.0.2 ping statistics --- 00:16:35.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.469 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:35.469 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.469 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:35.469 00:16:35.469 --- 10.0.0.3 ping statistics --- 00:16:35.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.469 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:35.469 00:16:35.469 --- 10.0.0.1 ping statistics --- 00:16:35.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.469 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78582 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78582 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78582 ']' 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.469 11:40:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.469 11:40:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=31027a0ab018a312a7fbca61e1a71e7d 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.n6C 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 31027a0ab018a312a7fbca61e1a71e7d 0 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 31027a0ab018a312a7fbca61e1a71e7d 0 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=31027a0ab018a312a7fbca61e1a71e7d 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:36.405 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.n6C 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.n6C 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.n6C 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ae4c19c66d0ed55de6c99e430c53f0ff97d72b22cb202ebd508f449702e272b1 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.J0t 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ae4c19c66d0ed55de6c99e430c53f0ff97d72b22cb202ebd508f449702e272b1 3 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ae4c19c66d0ed55de6c99e430c53f0ff97d72b22cb202ebd508f449702e272b1 3 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ae4c19c66d0ed55de6c99e430c53f0ff97d72b22cb202ebd508f449702e272b1 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.J0t 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.J0t 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.J0t 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bf6a58244ac56c54b688fe185b286e2337b0f8e80f2056d7 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2vR 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bf6a58244ac56c54b688fe185b286e2337b0f8e80f2056d7 0 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bf6a58244ac56c54b688fe185b286e2337b0f8e80f2056d7 0 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bf6a58244ac56c54b688fe185b286e2337b0f8e80f2056d7 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:36.664 11:40:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2vR 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2vR 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2vR 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ad42955d3b85c1f51fa6bbf9b7bc529f627e72536422733 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jPc 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ad42955d3b85c1f51fa6bbf9b7bc529f627e72536422733 2 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ad42955d3b85c1f51fa6bbf9b7bc529f627e72536422733 2 00:16:36.664 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ad42955d3b85c1f51fa6bbf9b7bc529f627e72536422733 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jPc 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jPc 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jPc 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0a8687a5a7f3942a8432ea850e573359 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SEQ 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0a8687a5a7f3942a8432ea850e573359 
1 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0a8687a5a7f3942a8432ea850e573359 1 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0a8687a5a7f3942a8432ea850e573359 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:36.665 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SEQ 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SEQ 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.SEQ 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=325c7e227189dc33a7708344c336093f 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4Dc 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 325c7e227189dc33a7708344c336093f 1 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 325c7e227189dc33a7708344c336093f 1 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=325c7e227189dc33a7708344c336093f 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4Dc 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4Dc 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4Dc 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:36.924 11:40:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d9d6d448ef4a1a892cbd12ce3d81b7c0183373e91fac047c 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2k9 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d9d6d448ef4a1a892cbd12ce3d81b7c0183373e91fac047c 2 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d9d6d448ef4a1a892cbd12ce3d81b7c0183373e91fac047c 2 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d9d6d448ef4a1a892cbd12ce3d81b7c0183373e91fac047c 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2k9 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2k9 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.2k9 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb7d8efe5454b2120e34fce1ed7d5d74 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ETV 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb7d8efe5454b2120e34fce1ed7d5d74 0 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb7d8efe5454b2120e34fce1ed7d5d74 0 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb7d8efe5454b2120e34fce1ed7d5d74 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ETV 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ETV 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ETV 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57c9210355f58aeedd564a1ddb22de2fd9b813af3082a35932728e8f12d38a2b 00:16:36.924 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:36.925 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eds 00:16:36.925 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57c9210355f58aeedd564a1ddb22de2fd9b813af3082a35932728e8f12d38a2b 3 00:16:36.925 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57c9210355f58aeedd564a1ddb22de2fd9b813af3082a35932728e8f12d38a2b 3 00:16:36.925 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:36.925 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:36.925 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=57c9210355f58aeedd564a1ddb22de2fd9b813af3082a35932728e8f12d38a2b 00:16:36.925 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:36.925 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eds 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eds 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eds 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78582 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78582 ']' 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
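Each gen_dhchap_key call above draws the raw secret from /dev/urandom with xxd -p, then the inline python helper behind format_dhchap_key wraps it into a DHHC-1:<digest-id>:<base64 payload>: string (digest ids 0..3 for null, sha256, sha384, sha512), writes it to a mktemp file under /tmp and restricts it to mode 0600. Judging from the keys echoed later in the trace, the payload appears to be the key bytes followed by a little-endian CRC32, the usual NVMe-oF in-band authentication secret representation; a rough stand-in for one such call (not the project's actual helper) could look like this:

  # Generate one DHCHAP secret the way "gen_dhchap_key null 32" does above.
  # Assumption: the DHHC-1 payload is base64(key bytes || little-endian CRC32 of the key).
  key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key" > "$file"   # 00 = null digest id
  chmod 0600 "$file"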
00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.183 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.n6C 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.J0t ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.J0t 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2vR 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jPc ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jPc 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.SEQ 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4Dc ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4Dc 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.2k9 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ETV ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ETV 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eds 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.442 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
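At this point the host/auth.sh@80-82 loop has registered every generated secret with the running nvmf_tgt as a named keyring file object (key0..key4 plus the optional controller secrets ckey0..ckey3; ckeys[4] was deliberately left empty). rpc_cmd in these scripts is assumed to resolve to scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, so the loop amounts to roughly the following, reusing the keys/ckeys arrays populated above:

  # Register each DHCHAP secret file with the target's keyring.
  # Assumption: rpc.py lives under the spdk_repo checkout used elsewhere in this job.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
      "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
      if [[ -n ${ckeys[i]:-} ]]; then                    # controller (bidirectional) key is optional
          "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
      fi
  done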
00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:37.443 11:40:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:37.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:37.702 Waiting for block devices as requested 00:16:37.960 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:37.960 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:38.527 No valid GPT data, bailing 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:38.527 No valid GPT data, bailing 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:38.527 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:38.528 11:40:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:38.786 No valid GPT data, bailing 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:38.786 No valid GPT data, bailing 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:38.786 11:40:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -a 10.0.0.1 -t tcp -s 4420 00:16:38.786 00:16:38.786 Discovery Log Number of Records 2, Generation counter 2 00:16:38.786 =====Discovery Log Entry 0====== 00:16:38.786 trtype: tcp 00:16:38.786 adrfam: ipv4 00:16:38.786 subtype: current discovery subsystem 00:16:38.786 treq: not specified, sq flow control disable supported 00:16:38.786 portid: 1 00:16:38.786 trsvcid: 4420 00:16:38.786 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:38.786 traddr: 10.0.0.1 00:16:38.786 eflags: none 00:16:38.786 sectype: none 00:16:38.786 =====Discovery Log Entry 1====== 00:16:38.786 trtype: tcp 00:16:38.786 adrfam: ipv4 00:16:38.786 subtype: nvme subsystem 00:16:38.786 treq: not specified, sq flow control disable supported 00:16:38.786 portid: 1 00:16:38.786 trsvcid: 4420 00:16:38.786 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:38.786 traddr: 10.0.0.1 00:16:38.786 eflags: none 00:16:38.786 sectype: none 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.786 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.045 nvme0n1 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.045 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.304 nvme0n1 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.304 nvme0n1 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.304 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.305 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.305 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.305 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.305 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.562 11:40:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:39.562 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.563 nvme0n1 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:39.563 11:40:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.563 11:40:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.821 nvme0n1 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.821 nvme0n1 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.821 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.822 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.822 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.079 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.337 nvme0n1 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.337 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.338 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.632 nvme0n1 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.632 11:40:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:40.632 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.633 11:40:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 nvme0n1 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 nvme0n1 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.891 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
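For readability, the block below condenses the host-side round that this trace replays for every digest/dhgroup/keyid combination. It is a sketch reconstructed only from the commands visible in the trace (rpc_cmd, the NQNs, the 10.0.0.1/4420 listener and the bdev_nvme_* RPC flags all appear above); wrapping them in a standalone function is my own framing, not the literal body of host/auth.sh's connect_authenticate().

  # Sketch, assuming rpc_cmd is available as in the test environment above.
  connect_authenticate_sketch() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Restrict the initiator to the digest/dhgroup pair under test.
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Attach with the bidirectional DH-HMAC-CHAP keys for this keyid
      # (in the trace, keyid 4 has no controller key, so --dhchap-ctrlr-key
      # is dropped for that iteration).
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      # Authentication succeeded if the controller shows up, then clean up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The trace resumes below with the same round for the next dhgroup.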
00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 nvme0n1 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.150 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.151 11:40:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
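The nvmf/common.sh lines repeated throughout this trace (local -A ip_candidates ... echo 10.0.0.1) are the expansion of the get_main_ns_ip helper that picks the address passed to bdev_nvme_attach_controller. The sketch below is my reading of how those traced values line up: the rdma/tcp candidate table and the final 10.0.0.1 come straight from the trace, while the TEST_TRANSPORT variable name and the ${!ip} indirections are assumptions (the trace only shows their expanded values).

  # Sketch of the IP-selection logic traced above; variable names other than
  # ip_candidates are assumed, not confirmed by the log.
  get_main_ns_ip_sketch() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
          ["tcp"]=NVMF_INITIATOR_IP       # this TCP run uses the initiator IP
      )
      # Bail out if the transport is unset or has no candidate variable.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # Dereference the variable name; in this run it resolves to 10.0.0.1.
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }

The remainder of the trace continues the same attach/verify/detach cycle for ffdhe4096 and ffdhe6144.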
00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.084 nvme0n1 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:16:42.084 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.085 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.342 nvme0n1 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.342 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.600 nvme0n1 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.600 11:40:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.600 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.858 nvme0n1 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:42.858 11:40:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.858 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.116 nvme0n1 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.116 11:40:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.014 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.015 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 nvme0n1 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.273 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.839 nvme0n1 00:16:45.839 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.839 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.839 11:40:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.839 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.839 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.839 11:40:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.839 
11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.839 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.097 nvme0n1 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.097 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.664 nvme0n1 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.664 11:40:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.664 11:40:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.922 nvme0n1 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.922 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.855 nvme0n1 00:16:47.855 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.855 11:40:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.855 11:40:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.855 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.855 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.855 11:40:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.855 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.855 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.855 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.855 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.855 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.855 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.855 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:47.855 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.856 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.422 nvme0n1 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:48.422 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.423 11:40:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.990 nvme0n1 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.990 
11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:48.990 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.248 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.248 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:49.248 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.248 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.248 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.248 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.248 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
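The trace above repeats one host-side RPC sequence per digest/DH-group/key-index combination: restrict the allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach a controller to the target at 10.0.0.1:4420 with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirm that a controller named nvme0 came up, then detach it before the next iteration. A minimal sketch of that cycle, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and that the key names key3/ckey3 were registered earlier in the script (neither detail is shown in this excerpt):

# Allow only the digest/dhgroup pair under test on the host side.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach to the target; the attach fails unless DH-HMAC-CHAP authentication
# with key3 (and the controller key ckey3) succeeds.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Verify the controller exists, then tear it down for the next iteration.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0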
00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.249 11:40:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.913 nvme0n1 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:49.913 
11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.913 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.914 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:49.914 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.914 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.479 nvme0n1 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.479 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.480 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 nvme0n1 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.737 11:40:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.737 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.737 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:50.737 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.737 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
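To make the xtrace output above easier to follow, the block below condenses the host-side steps of one connect_authenticate pass exactly as they appear in the trace (sha384 digest, ffdhe2048 DH group, key 0). It is a reconstruction from the logged commands, not the literal function body from host/auth.sh; rpc_cmd is the suite's wrapper around scripts/rpc.py.

```bash
# Condensed host-side flow of one connect_authenticate pass, reconstructed from the
# trace above (not the literal host/auth.sh body). rpc_cmd wraps scripts/rpc.py.
digest=sha384 dhgroup=ffdhe2048 keyid=0

# Pin the host to a single digest and DH group so the negotiated parameters are known.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect to the target, authenticating with keyN and (when present) the controller key ckeyN.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded if the controller is visible, then detach for the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```

The surrounding loops simply repeat this for every key ID (0-4; key 4 has no controller key, so --dhchap-ctrlr-key is dropped) and for each digest/DH-group combination, which is why the same command pattern recurs throughout the rest of this log.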
00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 nvme0n1 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.738 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 nvme0n1 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:50.996 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.997 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.255 nvme0n1 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.255 nvme0n1 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.255 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.513 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
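The target-side half of each iteration, nvmet_auth_set_key, appears in the trace only as the echo lines at auth.sh@48 through @51 (hash, DH group, key, optional controller key). A hedged reading of where those values go, assuming the kernel nvmet target's standard per-host configfs attributes (the path and attribute names below are assumptions, not shown in this log), is:

```bash
# Hedged sketch of nvmet_auth_set_key: the echoed hash, DH group, and DHHC-1 secrets
# most plausibly land in the kernel nvmet target's per-host configfs attributes.
# The configfs path and attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key,
# dhchap_ctrl_key) are assumptions based on the standard Linux nvmet layout; only the
# echoed values themselves come from this trace.
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo "hmac(${digest})" > "${host_cfg}/dhchap_hash"     # auth.sh@48
echo "${dhgroup}"      > "${host_cfg}/dhchap_dhgroup"  # auth.sh@49
echo "${key}"          > "${host_cfg}/dhchap_key"      # auth.sh@50, the DHHC-1:0x:... host secret
# auth.sh@51 only writes the controller key when one is defined (key 4 has none):
[[ -n ${ckey} ]] && echo "${ckey}" > "${host_cfg}/dhchap_ctrl_key"
```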
00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.514 nvme0n1 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
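Every attach in this log is preceded by the same get_main_ns_ip trace (nvmf/common.sh@741 through @755), which resolves the IP to dial. Reconstructed from those lines, it amounts to the sketch below; the TEST_TRANSPORT variable name and the ${!ip} indirection are inferred, since xtrace only prints the already-expanded values ("tcp", "NVMF_INITIATOR_IP", "10.0.0.1").

```bash
# Reconstruction of get_main_ns_ip from the repeated nvmf/common.sh@741-@755 trace lines.
# The TEST_TRANSPORT name and the ${!ip} indirection are inferred; xtrace only shows the
# already-resolved values ("tcp", "NVMF_INITIATOR_IP", "10.0.0.1").
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target-side IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}            # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}
```

For this TCP run it always resolves to 10.0.0.1, the address passed to every bdev_nvme_attach_controller call in the trace.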
00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.514 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.772 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.772 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.772 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.772 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.772 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.772 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.773 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.773 11:40:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.773 11:40:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.773 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.773 11:40:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.773 nvme0n1 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.773 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.031 nvme0n1 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.032 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.290 nvme0n1 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.290 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.291 nvme0n1 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.291 11:40:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.291 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.550 nvme0n1 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.550 11:40:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.809 nvme0n1 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.809 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.068 11:40:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.068 nvme0n1 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.068 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:53.326 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:53.327 11:40:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.327 nvme0n1 00:16:53.327 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:53.585 11:40:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.844 nvme0n1 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.844 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.103 nvme0n1 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.103 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.669 nvme0n1 00:16:54.669 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.670 11:40:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.670 11:40:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.928 nvme0n1 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.928 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.186 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.444 nvme0n1 00:16:55.444 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.444 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.444 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.444 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.444 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.444 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.444 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
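What the quoted echoes in this pass are doing: before every connect attempt, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) re-programs the kernel nvmet target with the digest, DH group and DHHC-1 secret(s) the host is about to authenticate with. The xtrace shows only the echo commands, not where they are redirected; the configfs layout in the sketch below is an assumption, included only to make the target-side half of each iteration explicit (values follow the sha384/ffdhe6144/keyid=4 pass traced above, whose ckey is empty).

    # Sketch of the target-side re-keying traced above (nvmet_auth_set_key).
    # The /sys/kernel/config/nvmet/... paths are assumed; they are not visible in the xtrace.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest selected by the outer loop
    echo ffdhe6144      > "$host_dir/dhchap_dhgroup"   # DH group selected by the outer loop
    echo "$key"         > "$host_dir/dhchap_key"       # DHHC-1:... secret for this keyid
    [[ -z "$ckey" ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # only when a controller key is defined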
00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.445 11:40:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.010 nvme0n1 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:56.010 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
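Host side of the same iteration: connect_authenticate (host/auth.sh@55-65) narrows the SPDK initiator to the digest/DH group under test, attaches over TCP with the matching key, checks that a controller actually came up, and detaches again so the next keyid starts clean. A condensed sketch of that cycle, using the ffdhe8192/keyid=0 values the trace moves on to here; rpc_cmd is assumed to be the usual autotest wrapper around scripts/rpc.py, and ckeys[] is the controller-key array set up earlier in the script.

    # Sketch of one connect_authenticate cycle as traced in this run.
    digest=sha384 dhgroup=ffdhe8192 keyid=0
    # Restrict the host to the digest/DH group being exercised
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach over TCP, authenticating with keyN (and ckeyN when a controller key exists)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # The attach only succeeds if DH-HMAC-CHAP completed; verify the controller, then tear down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Running keyid 0 through 4 against every digest/DH-group pair is what covers both unidirectional passes (keyid 4 has no ckey) and bidirectional ones in the same loop.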
00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.011 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.577 nvme0n1 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.577 11:40:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.156 nvme0n1 00:16:57.156 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.156 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.156 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.156 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.156 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.461 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.462 11:41:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 nvme0n1 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:58.028 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.029 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.594 nvme0n1 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.594 11:41:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.594 11:41:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.594 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.159 nvme0n1 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.417 nvme0n1 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.417 11:41:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.417 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.675 nvme0n1 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.675 11:41:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.675 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.933 nvme0n1 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.933 11:41:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.933 11:41:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.933 nvme0n1 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.933 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.191 nvme0n1 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.191 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.450 nvme0n1 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.450 
11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.450 11:41:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.450 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.707 nvme0n1 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
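Each nvmet_auth_set_key invocation traced above (host/auth.sh@42-@51) echoes, in order, the HMAC spec such as 'hmac(sha512)', the FFDHE group, the DH-HMAC-CHAP host key and, only when a bidirectional secret exists, the controller key. The redirections themselves are not visible in the xtrace; the sketch below assumes they land in the kernel nvmet host attributes, so the configfs path and the dhchap_* attribute names are assumptions rather than values taken from this log:

# Hedged sketch of the target-side key setup implied by the @48-@51 echoes.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed location; the host NQN matches the -q argument used when attaching.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"    # e.g. hmac(sha512)
    echo "$dhgroup" > "$host/dhchap_dhgroup"      # e.g. ffdhe3072
    echo "$key" > "$host/dhchap_key"              # DHHC-1:xx:...: host secret
    # The controller key is optional and only written for bidirectional auth.
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}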
00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.707 11:41:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.707 nvme0n1 00:17:00.707 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.707 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.707 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.707 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.707 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.707 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.965 11:41:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.965 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
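The get_main_ns_ip block repeated throughout the trace (nvmf/common.sh@741-@755) only resolves which address the host should dial for the selected transport. A minimal reconstruction of that logic from the traced lines follows; the variable names NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP come straight from the trace, while the name of the transport variable (TEST_TRANSPORT below) is an assumption:

# Sketch of the address-selection helper seen at nvmf/common.sh@741-@755.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out when no transport is set or it has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # variable name, e.g. NVMF_INITIATOR_IP
    ip=${!ip}                              # indirect expansion to its value
    [[ -z $ip ]] && return 1               # assumed failure path; this run always resolves 10.0.0.1
    echo "$ip"
}

The echoed 10.0.0.1 then becomes the -a argument of the bdev_nvme_attach_controller call at host/auth.sh@61.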
00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.966 nvme0n1 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.966 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.225 
11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.225 nvme0n1 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.225 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.485 nvme0n1 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.485 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.485 11:41:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.486 11:41:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.745 nvme0n1 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
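(annotation) The entries above trace the target side of one iteration: auth.sh's nvmet_auth_set_key helper (host/auth.sh@42-51) picks a digest, a DH group and a key index, then programs the matching DHHC-1 host key, plus the controller key when that index has one, into the kernel nvmet target before the initiator-side connect is attempted. xtrace does not show where the echoes are redirected, so the following is only a sketch of what the helper plausibly does; the configfs paths and the hostnqn / host_dir names are assumptions, not something captured in this log.

# Approximate reconstruction of nvmet_auth_set_key as traced at host/auth.sh@42-51.
# ASSUMPTION: the echoes land in the kernel nvmet configfs host entry; the exact
# paths and the $hostnqn variable are not visible in the xtrace and are hypothetical.
nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey

    digest="$1" dhgroup="$2" keyid="$3"
    key="${keys[keyid]}" ckey="${ckeys[keyid]}"

    local host_dir="/sys/kernel/config/nvmet/hosts/${hostnqn}"   # hypothetical path

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha512)
    echo "${dhgroup}" > "${host_dir}/dhchap_dhgroup"         # e.g. ffdhe4096
    echo "${key}" > "${host_dir}/dhchap_key"                 # DHHC-1:xx:... host key
    # The controller key is optional; key index 4 has none in this run.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}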
00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.745 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.746 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.004 nvme0n1 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.004 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.005 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.264 nvme0n1 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.264 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.523 nvme0n1 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
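(annotation) The nvmf/common.sh@741-755 entries around this point are get_main_ns_ip resolving which address the initiator should dial: the transport name selects an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and that variable's value, 10.0.0.1 in this run, is echoed back. A minimal sketch of that logic follows; the TEST_TRANSPORT variable name and the early-return handling are assumptions, since xtrace only shows the already-expanded values.

get_main_ns_ip() {
    # Map each transport to the environment variable that stores its address.
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    # ASSUMPTION: the transport comes from $TEST_TRANSPORT ("tcp" in this log).
    [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1
    echo "${!ip}"
}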
00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.524 11:41:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.092 nvme0n1 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
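(annotation) Each connect_authenticate invocation such as the one that follows (host/auth.sh@55-65) is the initiator side of the handshake: restrict bdev_nvme to the digest and DH group under test, attach a controller over TCP using the keyring names key<N>/ckey<N>, confirm that authentication succeeded by checking that controller nvme0 (and its namespace nvme0n1) appeared, then detach again for the next key. The sketch below mirrors the RPCs visible in the trace; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and the success check is simplified here.

connect_authenticate() {
    local digest="$1" dhgroup="$2" keyid="$3"
    # Only pass a controller key when one exists for this key index.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Limit negotiation to the digest/DH group combination under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the target; DH-HMAC-CHAP uses the pre-registered keyring
    # entries key${keyid} / ckey${keyid}.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The attach only completes if authentication succeeded, so a controller
    # named nvme0 showing up is the pass condition; then tear it down.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}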
00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.092 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.093 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.351 nvme0n1 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.352 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.611 11:41:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.870 nvme0n1 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.870 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.437 nvme0n1 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.437 11:41:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.696 nvme0n1 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.696 11:41:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzEwMjdhMGFiMDE4YTMxMmE3ZmJjYTYxZTFhNzFlN2RUc4F3: 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWU0YzE5YzY2ZDBlZDU1ZGU2Yzk5ZTQzMGM1M2YwZmY5N2Q3MmIyMmNiMjAyZWJkNTA4ZjQ0OTcwMmUyNzJiMWKZnPY=: 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.696 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.262 nvme0n1 00:17:05.262 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.262 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.262 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.262 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.263 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.263 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.522 11:41:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.089 nvme0n1 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.089 11:41:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.089 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4Njg3YTVhN2YzOTQyYTg0MzJlYTg1MGU1NzMzNTm9DNPV: 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: ]] 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI1YzdlMjI3MTg5ZGMzM2E3NzA4MzQ0YzMzNjA5M2avvCn+: 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.090 11:41:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.715 nvme0n1 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDlkNmQ0NDhlZjRhMWE4OTJjYmQxMmNlM2Q4MWI3YzAxODMzNzNlOTFmYWMwNDdjuzYjUA==: 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWI3ZDhlZmU1NDU0YjIxMjBlMzRmY2UxZWQ3ZDVkNzTN6t8g: 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:06.715 11:41:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.715 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.300 nvme0n1 00:17:07.300 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.300 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.300 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.300 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.300 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.300 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTdjOTIxMDM1NWY1OGFlZWRkNTY0YTFkZGIyMmRlMmZkOWI4MTNhZjMwODJhMzU5MzI3MjhlOGYxMmQzOGEyYu9lEMQ=: 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.559 11:41:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.560 11:41:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.560 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:07.560 11:41:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.127 nvme0n1 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmY2YTU4MjQ0YWM1NmM1NGI2ODhmZTE4NWIyODZlMjMzN2IwZjhlODBmMjA1NmQ3jUZ7XA==: 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: ]] 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmFkNDI5NTVkM2I4NWMxZjUxZmE2YmJmOWI3YmM1MjlmNjI3ZTcyNTM2NDIyNzMzZPZH2Q==: 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.127 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.127 
11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.128 request: 00:17:08.128 { 00:17:08.128 "name": "nvme0", 00:17:08.128 "trtype": "tcp", 00:17:08.128 "traddr": "10.0.0.1", 00:17:08.128 "adrfam": "ipv4", 00:17:08.128 "trsvcid": "4420", 00:17:08.128 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:08.128 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:08.128 "prchk_reftag": false, 00:17:08.128 "prchk_guard": false, 00:17:08.128 "hdgst": false, 00:17:08.128 "ddgst": false, 00:17:08.128 "method": "bdev_nvme_attach_controller", 00:17:08.128 "req_id": 1 00:17:08.128 } 00:17:08.128 Got JSON-RPC error response 00:17:08.128 response: 00:17:08.128 { 00:17:08.128 "code": -5, 00:17:08.128 "message": "Input/output error" 00:17:08.128 } 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.128 11:41:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 request: 00:17:08.387 { 00:17:08.387 "name": "nvme0", 00:17:08.387 "trtype": "tcp", 00:17:08.387 "traddr": "10.0.0.1", 00:17:08.387 "adrfam": "ipv4", 00:17:08.387 "trsvcid": "4420", 00:17:08.387 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:08.387 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:08.387 "prchk_reftag": false, 00:17:08.387 "prchk_guard": false, 00:17:08.387 "hdgst": false, 00:17:08.387 "ddgst": false, 00:17:08.387 "dhchap_key": "key2", 00:17:08.387 "method": "bdev_nvme_attach_controller", 00:17:08.387 "req_id": 1 00:17:08.387 } 00:17:08.387 Got JSON-RPC error response 00:17:08.387 response: 00:17:08.387 { 00:17:08.387 "code": -5, 00:17:08.387 "message": "Input/output error" 00:17:08.387 } 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:08.387 11:41:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.387 request: 00:17:08.387 { 00:17:08.387 "name": "nvme0", 00:17:08.387 "trtype": "tcp", 00:17:08.387 "traddr": "10.0.0.1", 00:17:08.387 "adrfam": "ipv4", 
00:17:08.387 "trsvcid": "4420", 00:17:08.387 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:08.387 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:08.387 "prchk_reftag": false, 00:17:08.387 "prchk_guard": false, 00:17:08.387 "hdgst": false, 00:17:08.387 "ddgst": false, 00:17:08.387 "dhchap_key": "key1", 00:17:08.387 "dhchap_ctrlr_key": "ckey2", 00:17:08.387 "method": "bdev_nvme_attach_controller", 00:17:08.387 "req_id": 1 00:17:08.387 } 00:17:08.387 Got JSON-RPC error response 00:17:08.387 response: 00:17:08.387 { 00:17:08.387 "code": -5, 00:17:08.387 "message": "Input/output error" 00:17:08.387 } 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:08.387 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.388 rmmod nvme_tcp 00:17:08.388 rmmod nvme_fabrics 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78582 ']' 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78582 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78582 ']' 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78582 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78582 00:17:08.388 killing process with pid 78582 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78582' 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78582 00:17:08.388 11:41:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78582 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.955 
11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:08.955 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:08.956 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:08.956 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:08.956 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:08.956 11:41:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:09.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:09.523 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:09.780 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:09.780 11:41:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.n6C /tmp/spdk.key-null.2vR /tmp/spdk.key-sha256.SEQ /tmp/spdk.key-sha384.2k9 /tmp/spdk.key-sha512.eds /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:09.780 11:41:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:10.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:10.038 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:10.038 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:10.297 00:17:10.297 real 0m35.290s 00:17:10.297 user 0m31.495s 00:17:10.297 sys 0m3.705s 00:17:10.297 11:41:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.297 11:41:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.297 
************************************ 00:17:10.297 END TEST nvmf_auth_host 00:17:10.297 ************************************ 00:17:10.297 11:41:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:10.297 11:41:13 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:10.297 11:41:13 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:10.297 11:41:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:10.297 11:41:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.297 11:41:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.297 ************************************ 00:17:10.297 START TEST nvmf_digest 00:17:10.297 ************************************ 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:10.297 * Looking for test storage... 00:17:10.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:10.297 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:10.298 Cannot find device "nvmf_tgt_br" 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:10.298 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.556 Cannot find device "nvmf_tgt_br2" 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:10.556 Cannot find device "nvmf_tgt_br" 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:10.556 11:41:13 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:10.556 Cannot find device "nvmf_tgt_br2" 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:10.556 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:10.557 11:41:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:10.557 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:10.816 11:41:14 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:10.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:10.816 00:17:10.816 --- 10.0.0.2 ping statistics --- 00:17:10.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.816 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:10.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:10.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:17:10.816 00:17:10.816 --- 10.0.0.3 ping statistics --- 00:17:10.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.816 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:10.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:10.816 00:17:10.816 --- 10.0.0.1 ping statistics --- 00:17:10.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.816 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:10.816 ************************************ 00:17:10.816 START TEST nvmf_digest_clean 00:17:10.816 ************************************ 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:10.816 11:41:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80158 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80158 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80158 ']' 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.816 11:41:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:10.816 [2024-07-12 11:41:14.143783] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:17:10.816 [2024-07-12 11:41:14.143889] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.076 [2024-07-12 11:41:14.288788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.076 [2024-07-12 11:41:14.449148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.076 [2024-07-12 11:41:14.449226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.076 [2024-07-12 11:41:14.449240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.076 [2024-07-12 11:41:14.449251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.076 [2024-07-12 11:41:14.449261] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
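Note: the nvmfappstart call traced here boots the SPDK target for the digest tests inside the nvmf_tgt_ns_spdk network namespace and holds it at --wait-for-rpc so configuration can be pushed before subsystem init. A minimal sketch of that launch pattern, assuming the SPDK checkout at /home/vagrant/spdk_repo/spdk and the default /var/tmp/spdk.sock RPC socket; the waitforlisten helper from the test scripts is replaced here by a simple readiness probe, which is an assumption, not the helper's actual implementation:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the target paused inside the test namespace (shm id 0, all trace groups enabled).
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the RPC socket until the target answers, then finish framework init.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init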
00:17:11.076 [2024-07-12 11:41:14.449294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.011 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.012 [2024-07-12 11:41:15.255001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:12.012 null0 00:17:12.012 [2024-07-12 11:41:15.320250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.012 [2024-07-12 11:41:15.344362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80191 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80191 /var/tmp/bperf.sock 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80191 ']' 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:12.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.012 11:41:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.012 [2024-07-12 11:41:15.401545] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:17:12.012 [2024-07-12 11:41:15.401665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80191 ] 00:17:12.271 [2024-07-12 11:41:15.536875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.271 [2024-07-12 11:41:15.678796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.207 11:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.207 11:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:13.207 11:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:13.207 11:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:13.207 11:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:13.467 [2024-07-12 11:41:16.662831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:13.467 11:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:13.467 11:41:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:13.726 nvme0n1 00:17:13.726 11:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:13.726 11:41:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:13.726 Running I/O for 2 seconds... 
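Note: the run_bperf flow traced above is self-contained: bdevperf is launched with its own JSON-RPC socket and held at --wait-for-rpc, framework init is completed over that socket, a controller is attached to the target with the data digest enabled (--ddgst), and the queued workload is started via bdevperf.py. A condensed sketch of the same sequence, reusing the addresses and names from the trace (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, bdev nvme0); backgrounding and the wait for the bperf socket to appear are simplifications:

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock
    # 4 KiB random reads, queue depth 128, 2 seconds, paused until RPC-driven init.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the test waits for the bperf socket before issuing RPCs; that wait is elided here)
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
    # Attach the TCP controller with the data digest enabled on the host side.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Run the configured job against the freshly created nvme0n1 bdev.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The second run_bperf pass later in this test repeats the same sequence with -o 131072 -q 16, which is why the zero-copy threshold notice appears there.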
00:17:16.293 00:17:16.293 Latency(us) 00:17:16.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:16.293 nvme0n1 : 2.00 14389.43 56.21 0.00 0.00 8889.80 2353.34 23950.43 00:17:16.293 =================================================================================================================== 00:17:16.293 Total : 14389.43 56.21 0.00 0.00 8889.80 2353.34 23950.43 00:17:16.293 0 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:16.293 | select(.opcode=="crc32c") 00:17:16.293 | "\(.module_name) \(.executed)"' 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80191 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80191 ']' 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80191 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80191 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:16.293 killing process with pid 80191 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80191' 00:17:16.293 Received shutdown signal, test time was about 2.000000 seconds 00:17:16.293 00:17:16.293 Latency(us) 00:17:16.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.293 =================================================================================================================== 00:17:16.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80191 00:17:16.293 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80191 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80252 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80252 /var/tmp/bperf.sock 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80252 ']' 00:17:16.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.553 11:41:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:16.553 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:16.553 Zero copy mechanism will not be used. 00:17:16.553 [2024-07-12 11:41:19.880044] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:17:16.553 [2024-07-12 11:41:19.880159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80252 ] 00:17:16.812 [2024-07-12 11:41:20.018704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.812 [2024-07-12 11:41:20.170775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.748 11:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.748 11:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:17.748 11:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:17.748 11:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:17.748 11:41:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:17.748 [2024-07-12 11:41:21.149814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:18.007 11:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:18.007 11:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:18.278 nvme0n1 00:17:18.278 11:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:18.278 11:41:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:18.537 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:18.537 Zero copy mechanism will not be used. 00:17:18.537 Running I/O for 2 seconds... 
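Each run_bperf iteration recorded above follows the same RPC-driven pattern: bdevperf is launched idle with -z --wait-for-rpc, framework init is completed over /var/tmp/bperf.sock, an NVMe/TCP controller is attached with data digest enabled, and the workload is triggered from bdevperf.py. A minimal sketch of that sequence for the 131072-byte / qd 16 randread run above, using the flags, socket path, address and NQN that appear in this log (backgrounding the app with & is the sketch's own shorthand for what the script does internally):

# Launch bdevperf idle; -z and --wait-for-rpc make it wait for RPCs on bperf.sock.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

# Finish subsystem init, then attach the NVMe/TCP controller with data digest (--ddgst) on.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Kick off the timed workload (the "Running I/O for 2 seconds..." phase above).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests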
00:17:20.499 00:17:20.499 Latency(us) 00:17:20.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.499 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:20.499 nvme0n1 : 2.00 6599.73 824.97 0.00 0.00 2420.74 2219.29 6702.55 00:17:20.499 =================================================================================================================== 00:17:20.499 Total : 6599.73 824.97 0.00 0.00 2420.74 2219.29 6702.55 00:17:20.499 0 00:17:20.499 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:20.499 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:20.499 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:20.499 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:20.499 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:20.499 | select(.opcode=="crc32c") 00:17:20.499 | "\(.module_name) \(.executed)"' 00:17:20.757 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:20.757 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:20.757 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:20.757 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:20.757 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80252 00:17:20.757 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80252 ']' 00:17:20.757 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80252 00:17:20.757 11:41:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:20.757 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.757 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80252 00:17:20.757 killing process with pid 80252 00:17:20.757 Received shutdown signal, test time was about 2.000000 seconds 00:17:20.757 00:17:20.757 Latency(us) 00:17:20.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.757 =================================================================================================================== 00:17:20.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.757 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:20.757 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:20.757 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80252' 00:17:20.757 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80252 00:17:20.757 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80252 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80312 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80312 /var/tmp/bperf.sock 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:21.015 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80312 ']' 00:17:21.016 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:21.016 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.016 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:21.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:21.016 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.016 11:41:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:21.016 [2024-07-12 11:41:24.398412] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:17:21.016 [2024-07-12 11:41:24.398736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80312 ] 00:17:21.274 [2024-07-12 11:41:24.537305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.274 [2024-07-12 11:41:24.648017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.209 11:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.209 11:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:22.209 11:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:22.209 11:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:22.209 11:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:22.467 [2024-07-12 11:41:25.702379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:22.467 11:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:22.467 11:41:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:22.726 nvme0n1 00:17:22.726 11:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:22.726 11:41:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:22.726 Running I/O for 2 seconds... 
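Each clean run then verifies, via the accel_get_stats / jq lines interleaved above, that the CRC-32C digests were actually computed and by the expected module; with scan_dsa=false the expected module is the software path. A condensed equivalent of that check against the same /var/tmp/bperf.sock socket (the process substitution is just a compact way to feed read -r acc_module acc_executed, as the script does):

# Pull accel framework stats from bdevperf and keep only the crc32c opcode.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# crc32c must have run at least once, and (with DSA scanning disabled) through
# the software module, otherwise the digest test fails here.
(( acc_executed > 0 )) && [[ "$acc_module" == "software" ]]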
00:17:25.286 00:17:25.286 Latency(us) 00:17:25.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.286 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.286 nvme0n1 : 2.00 15983.16 62.43 0.00 0.00 8001.53 7119.59 15490.33 00:17:25.286 =================================================================================================================== 00:17:25.286 Total : 15983.16 62.43 0.00 0.00 8001.53 7119.59 15490.33 00:17:25.286 0 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:25.286 | select(.opcode=="crc32c") 00:17:25.286 | "\(.module_name) \(.executed)"' 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80312 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80312 ']' 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80312 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80312 00:17:25.286 killing process with pid 80312 00:17:25.286 Received shutdown signal, test time was about 2.000000 seconds 00:17:25.286 00:17:25.286 Latency(us) 00:17:25.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.286 =================================================================================================================== 00:17:25.286 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80312' 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80312 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80312 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80372 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80372 /var/tmp/bperf.sock 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80372 ']' 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:25.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.286 11:41:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:25.544 [2024-07-12 11:41:28.763480] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:17:25.544 [2024-07-12 11:41:28.763802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80372 ] 00:17:25.544 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:25.544 Zero copy mechanism will not be used. 
00:17:25.544 [2024-07-12 11:41:28.892869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.802 [2024-07-12 11:41:29.001468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.368 11:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.368 11:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:26.368 11:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:26.368 11:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:26.368 11:41:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:26.627 [2024-07-12 11:41:30.022224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:26.627 11:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.627 11:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:27.194 nvme0n1 00:17:27.194 11:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:27.194 11:41:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:27.194 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:27.194 Zero copy mechanism will not be used. 00:17:27.194 Running I/O for 2 seconds... 
00:17:29.099 00:17:29.099 Latency(us) 00:17:29.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.099 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:29.099 nvme0n1 : 2.00 6389.54 798.69 0.00 0.00 2498.24 1951.19 11736.90 00:17:29.099 =================================================================================================================== 00:17:29.099 Total : 6389.54 798.69 0.00 0.00 2498.24 1951.19 11736.90 00:17:29.099 0 00:17:29.099 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:29.099 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:29.099 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:29.099 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:29.099 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:29.099 | select(.opcode=="crc32c") 00:17:29.099 | "\(.module_name) \(.executed)"' 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80372 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80372 ']' 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80372 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80372 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:29.667 killing process with pid 80372 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80372' 00:17:29.667 Received shutdown signal, test time was about 2.000000 seconds 00:17:29.667 00:17:29.667 Latency(us) 00:17:29.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.667 =================================================================================================================== 00:17:29.667 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80372 00:17:29.667 11:41:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80372 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80158 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80158 ']' 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80158 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80158 00:17:29.667 killing process with pid 80158 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80158' 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80158 00:17:29.667 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80158 00:17:29.925 ************************************ 00:17:29.925 END TEST nvmf_digest_clean 00:17:29.926 ************************************ 00:17:29.926 00:17:29.926 real 0m19.241s 00:17:29.926 user 0m36.862s 00:17:29.926 sys 0m5.271s 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:29.926 ************************************ 00:17:29.926 START TEST nvmf_digest_error 00:17:29.926 ************************************ 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:29.926 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80460 00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80460 00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80460 ']' 00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.184 11:41:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.184 [2024-07-12 11:41:33.421613] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:17:30.184 [2024-07-12 11:41:33.421701] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.184 [2024-07-12 11:41:33.558366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.442 [2024-07-12 11:41:33.669447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.442 [2024-07-12 11:41:33.669727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.442 [2024-07-12 11:41:33.669883] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.442 [2024-07-12 11:41:33.670024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.442 [2024-07-12 11:41:33.670246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:30.442 [2024-07-12 11:41:33.670323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.008 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.008 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:31.008 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.008 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.008 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.264 [2024-07-12 11:41:34.494956] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.264 [2024-07-12 11:41:34.554902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:31.264 null0 00:17:31.264 [2024-07-12 11:41:34.606129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.264 [2024-07-12 11:41:34.630223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80494 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80494 /var/tmp/bperf.sock 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80494 ']' 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 
00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:31.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.264 11:41:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.521 [2024-07-12 11:41:34.718871] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:17:31.521 [2024-07-12 11:41:34.719164] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80494 ] 00:17:31.521 [2024-07-12 11:41:34.862319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.778 [2024-07-12 11:41:34.979381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.778 [2024-07-12 11:41:35.033188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:32.352 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.352 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:32.352 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:32.352 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:32.609 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:32.609 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.609 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:32.609 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.609 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:32.609 11:41:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:32.866 nvme0n1 00:17:32.866 11:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:32.866 11:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.866 11:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:32.866 11:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.866 11:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
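The nvmf_digest_error test starting here reuses the same bperf flow, but first routes crc32c through the accel error module on the target and then arms corruption, so the reads that follow complete with data digest errors (the nvme_tcp.c / nvme_qpair.c messages below). A sketch of the injection-specific RPCs captured in this log; addressing the target explicitly with rpc.py -s /var/tmp/spdk.sock (the socket waited on above) instead of the script's rpc_cmd wrapper is an assumption of the sketch:

# Target side: route crc32c through the error-injection accel module
# ("Operation crc32c will be assigned to module error" above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    accel_assign_opc -o crc32c -m error

# bdevperf side: keep per-command error stats and retry failed I/O indefinitely.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep injection disabled while attaching the digest-enabled controller, then arm
# corruption (-t corrupt -i 256, as in host/digest.sh@67) before perform_tests.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    accel_error_inject_error -o crc32c -t disable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    accel_error_inject_error -o crc32c -t corrupt -i 256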
00:17:32.866 11:41:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:33.123 Running I/O for 2 seconds... 00:17:33.123 [2024-07-12 11:41:36.411556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.411627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.411653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.428459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.428501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.428515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.445352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.445394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.445409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.462208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.462260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.462284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.479102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.479144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.479159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.495960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.496000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.496015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.512816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.512858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:17584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.512872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.529700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.529739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.529753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.546567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.546618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.546633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.123 [2024-07-12 11:41:36.563429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.123 [2024-07-12 11:41:36.563471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.123 [2024-07-12 11:41:36.563486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.580318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.580362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.580377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.597169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.597209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.597225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.614028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.614068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.614082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.630910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.630950] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.630965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.647721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.647761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.647774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.664569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.664620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.664635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.681380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.681421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.681435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.698183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.698223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.698237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.714984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.715023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.715037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.731845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.731888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.731902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.748723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.748771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.748786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.765573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.765620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.765635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.782447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.782490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.782505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.799333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.799376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.799391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.381 [2024-07-12 11:41:36.816224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.381 [2024-07-12 11:41:36.816269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.381 [2024-07-12 11:41:36.816284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.640 [2024-07-12 11:41:36.833166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.640 [2024-07-12 11:41:36.833208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.640 [2024-07-12 11:41:36.833224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.640 [2024-07-12 11:41:36.850012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.640 [2024-07-12 11:41:36.850051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.640 [2024-07-12 11:41:36.850064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.640 [2024-07-12 11:41:36.866828] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.640 [2024-07-12 11:41:36.866868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.640 [2024-07-12 11:41:36.866882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.640 [2024-07-12 11:41:36.883662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.640 [2024-07-12 11:41:36.883702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.640 [2024-07-12 11:41:36.883715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:36.900443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:36.900482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:36.900496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:36.917243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:36.917283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:36.917298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:36.934079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:36.934118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:36.934133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:36.950951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:36.950991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:36.951006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:36.967797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:36.967839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:36.967854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:33.641 [2024-07-12 11:41:36.984694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:36.984736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:36.984751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:37.001526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:37.001567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:37.001596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:37.018458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:37.018501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:37.018517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:37.035444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:37.035490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:37.035505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:37.052395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:37.052438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:37.052454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:37.069291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:37.069334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:37.069349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.641 [2024-07-12 11:41:37.086229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.641 [2024-07-12 11:41:37.086272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.641 [2024-07-12 11:41:37.086287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.103098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.103140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.103154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.119963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.120004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.120018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.136791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.136832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.136846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.153622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.153660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.153675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.170442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.170482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.170496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.187323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.187363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.187378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.204170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.204210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.204223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.220976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.221015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.221029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.237856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.237897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.237911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.254705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.254743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.254758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.271499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.271538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.271552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.288370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.288413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.288427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.305320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.305370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.305386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.322268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.322314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:33.900 [2024-07-12 11:41:37.322329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.900 [2024-07-12 11:41:37.339139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:33.900 [2024-07-12 11:41:37.339183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.900 [2024-07-12 11:41:37.339200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.356162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.356203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.356217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.372997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.373039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.373053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.389879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.389920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.389934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.406725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.406762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.406775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.423532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.423567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.423590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.440346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.440380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 
nsid:1 lba:22320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.440393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.457151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.457185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.457197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.481216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.481253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.481266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.498060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.159 [2024-07-12 11:41:37.498095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.159 [2024-07-12 11:41:37.498108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.159 [2024-07-12 11:41:37.514865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.160 [2024-07-12 11:41:37.514899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.160 [2024-07-12 11:41:37.514912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.160 [2024-07-12 11:41:37.531728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.160 [2024-07-12 11:41:37.531764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.160 [2024-07-12 11:41:37.531776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.160 [2024-07-12 11:41:37.548506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.160 [2024-07-12 11:41:37.548541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.160 [2024-07-12 11:41:37.548555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.160 [2024-07-12 11:41:37.565328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.160 [2024-07-12 11:41:37.565363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.160 [2024-07-12 11:41:37.565376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.160 [2024-07-12 11:41:37.582118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.160 [2024-07-12 11:41:37.582152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.160 [2024-07-12 11:41:37.582165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.160 [2024-07-12 11:41:37.598897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.160 [2024-07-12 11:41:37.598934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.160 [2024-07-12 11:41:37.598947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.418 [2024-07-12 11:41:37.615721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.418 [2024-07-12 11:41:37.615758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.418 [2024-07-12 11:41:37.615770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.418 [2024-07-12 11:41:37.632547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.418 [2024-07-12 11:41:37.632599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.418 [2024-07-12 11:41:37.632615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.418 [2024-07-12 11:41:37.649428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.418 [2024-07-12 11:41:37.649466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.418 [2024-07-12 11:41:37.649479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.418 [2024-07-12 11:41:37.666252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.418 [2024-07-12 11:41:37.666287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.418 [2024-07-12 11:41:37.666301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.418 [2024-07-12 11:41:37.683072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.683109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.683122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.699875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.699918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.699930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.716755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.716791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.716805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.733552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.733598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.733612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.750338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.750372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.750386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.767116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.767150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.767163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.783917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.783950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.783963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.800695] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.800729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.800741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.817989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.818027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.818039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.834801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.834835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.834848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.419 [2024-07-12 11:41:37.851656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.419 [2024-07-12 11:41:37.851692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.419 [2024-07-12 11:41:37.851705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:37.868446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:37.868482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:37.868495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:37.885269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:37.885306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:37.885319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:37.902130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:37.902167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:37.902181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:34.678 [2024-07-12 11:41:37.919120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:37.919158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:37.919171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:37.935960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:37.935995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:37.936009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:37.952764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:37.952798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:37.952810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:37.969550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:37.969595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:37.969609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:37.986329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:37.986364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:37.986377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:38.003213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:38.003249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:38.003261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:38.020094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:38.020131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:38.020144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:38.036899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:38.036935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:38.036948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:38.053762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:38.053797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:38.053810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:38.070625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:38.070659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:38.070672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:38.087468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:38.087502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:38.087515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:38.104355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:38.104391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:38.104404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.678 [2024-07-12 11:41:38.121217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.678 [2024-07-12 11:41:38.121251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.678 [2024-07-12 11:41:38.121264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.138024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.138059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.138071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.154848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.154881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.154894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.171791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.171834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.171847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.188637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.188672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.188686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.205505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.205541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.205554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.222368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.222404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.222417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.239223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.239257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.239270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.256063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.256097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:34.937 [2024-07-12 11:41:38.256110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.272907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.272941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.272954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.289679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.289712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.289724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.306454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.306488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.306502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.323280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.323318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.323331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.340139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.340172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.340186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.356951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.356985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.356997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.937 [2024-07-12 11:41:38.373759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:34.937 [2024-07-12 11:41:38.373792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:14829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.937 [2024-07-12 11:41:38.373805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.196 [2024-07-12 11:41:38.390138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x231c020) 00:17:35.196 [2024-07-12 11:41:38.390173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.196 [2024-07-12 11:41:38.390186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.196 00:17:35.196 Latency(us) 00:17:35.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.196 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:35.196 nvme0n1 : 2.01 14974.26 58.49 0.00 0.00 8541.98 7923.90 32648.84 00:17:35.196 =================================================================================================================== 00:17:35.196 Total : 14974.26 58.49 0.00 0.00 8541.98 7923.90 32648.84 00:17:35.196 0 00:17:35.196 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:35.196 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:35.196 | .driver_specific 00:17:35.196 | .nvme_error 00:17:35.196 | .status_code 00:17:35.196 | .command_transient_transport_error' 00:17:35.196 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:35.196 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80494 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80494 ']' 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80494 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80494 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:35.454 killing process with pid 80494 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80494' 00:17:35.454 Received shutdown signal, test time was about 2.000000 seconds 00:17:35.454 00:17:35.454 Latency(us) 00:17:35.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.454 =================================================================================================================== 00:17:35.454 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.454 11:41:38 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80494 00:17:35.454 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80494 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80550 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80550 /var/tmp/bperf.sock 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80550 ']' 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.713 11:41:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:35.713 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:35.713 Zero copy mechanism will not be used. 00:17:35.713 [2024-07-12 11:41:38.995983] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
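The trace above closes out the 4 KiB randread pass and prepares the next one: get_transient_errcount fetches the bdev's I/O statistics over the bperf RPC socket with bdev_get_iostat, the jq filter pulls out driver_specific.nvme_error.status_code.command_transient_transport_error (118 on this pass), the harness checks that the count is non-zero, kills the old bdevperf instance (pid 80494), and relaunches bdevperf for a randread run with 131072-byte I/O at queue depth 16. Below is a minimal bash sketch of that sequence, built only from the commands visible in the trace; wait_for_sock is a hypothetical stand-in for the harness's waitforlisten helper, not its actual implementation.

  #!/usr/bin/env bash
  # Sketch reconstructed from the trace above; paths and flags come from the trace,
  # wait_for_sock is an illustrative stand-in for waitforlisten.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Read back how many completions were recorded as transient transport errors.
  get_transient_errcount() {
      "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }

  errs=$(get_transient_errcount nvme0n1)
  (( errs > 0 ))                      # the pass fails if no injected errors were seen

  kill "$prev_bperfpid" && wait "$prev_bperfpid"   # pid 80494 in the trace above

  # Relaunch bdevperf for the next pass: randread, 128 KiB I/O, queue depth 16,
  # 2-second run; -z keeps it idle until perform_tests arrives over the RPC socket.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Hypothetical stand-in for waitforlisten: poll until the RPC socket appears.
  wait_for_sock() {
      local i
      for ((i = 0; i < 100; i++)); do
          [[ -S $BPERF_SOCK ]] && return 0
          sleep 0.1
      done
      return 1
  }
  wait_for_sock
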
00:17:35.713 [2024-07-12 11:41:38.996070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80550 ] 00:17:35.713 [2024-07-12 11:41:39.136802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.980 [2024-07-12 11:41:39.263410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.981 [2024-07-12 11:41:39.321924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:36.546 11:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.546 11:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:36.546 11:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:36.546 11:41:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:36.804 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:36.804 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.804 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:36.804 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.804 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:36.804 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.062 nvme0n1 00:17:37.062 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:37.062 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.062 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.062 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.062 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:37.062 11:41:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:37.322 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:37.322 Zero copy mechanism will not be used. 00:17:37.322 Running I/O for 2 seconds... 
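Before the two-second run starts, the trace above configures the fresh bdevperf instance entirely over RPC: bdev_nvme_set_options enables per-status-code NVMe error counters with --bdev-retry-count -1, any CRC-32C error injection left from the previous pass is disabled, the controller is attached over NVMe/TCP with data digest enabled (--ddgst), CRC-32C error injection is re-armed in corrupt mode (-i 32), and perform_tests starts the queued workload. The bash sketch below replays the same RPC sequence; the commands and arguments are taken verbatim from the trace, but routing the accel_error_inject_error calls to the bdevperf socket is an assumption, since the trace issues them through rpc_cmd with xtrace disabled and never shows the socket path.

  #!/usr/bin/env bash
  # RPC setup for one digest-error-injection pass, reconstructed from the trace above.
  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock
  bperf_rpc() { "$RPC_PY" -s "$BPERF_SOCK" "$@"; }

  # Keep NVMe error statistics per status code; --bdev-retry-count -1 as in the trace.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any CRC-32C injection still active from the previous pass.
  # (Assumption: sent to the same socket; the harness uses rpc_cmd here.)
  bperf_rpc accel_error_inject_error -o crc32c -t disable

  # Attach the target over NVMe/TCP with data digest enabled; this exposes nvme0n1.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Re-arm CRC-32C error injection in corrupt mode (-i 32, as in the trace).
  bperf_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # Start the queued workload over the same socket.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$BPERF_SOCK" perform_tests

Each corrupted digest then shows up in the records that follow as a data digest error on the qpair paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is what get_transient_errcount tallies once the run finishes.
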
00:17:37.322 [2024-07-12 11:41:40.543211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.543265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.543281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.547501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.547536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.547550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.551707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.551744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.551758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.555803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.555840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.555854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.559977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.560011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.560025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.564264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.564301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.564314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.568556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.568601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.568615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.572765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.572798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.572811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.576945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.576978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.576991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.581217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.581252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.581265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.585435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.585470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.585483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.589663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.589696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.589709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.593961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.593997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.594010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.598114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.598149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.598163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.602329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.602364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.602376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.606692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.606727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.606740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.610923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.610958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.610971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.615106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.615159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.615173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.619323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.619358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.619371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.623680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.623715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.623729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.628021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.628055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:37.323 [2024-07-12 11:41:40.628068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.632241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.632275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.632288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.636631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.636666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.636679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.640836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.640870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.640883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.644979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.645013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.645027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.649149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.649183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.649196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.653454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.653489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.653502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.657607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.657642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.657654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.661756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.323 [2024-07-12 11:41:40.661790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.323 [2024-07-12 11:41:40.661802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.323 [2024-07-12 11:41:40.666027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.666063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.666076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.670295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.670329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.670342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.674482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.674517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.674531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.678725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.678758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.678771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.683043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.683081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.683094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.687439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.687475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.687489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.691749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.691784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.691797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.696046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.696083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.696097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.700160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.700195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.700208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.704412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.704448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.704461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.708607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.708637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.708650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.712851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.712889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.712902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.717064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 
00:17:37.324 [2024-07-12 11:41:40.717101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.717114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.721360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.721398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.721411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.725653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.725691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.725705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.729854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.729889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.729902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.733995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.734031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.734043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.738124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.738158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.738171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.742313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.742347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.742360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.746610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.746644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.746657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.750769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.750803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.750815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.754975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.755009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.755022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.759263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.759298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.759311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.763696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.763731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.763744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.324 [2024-07-12 11:41:40.768013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.324 [2024-07-12 11:41:40.768046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.324 [2024-07-12 11:41:40.768059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.772239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.772273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.772285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.776442] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.776476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.776489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.780562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.780606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.780619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.784764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.784798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.784810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.788995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.789029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.789042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.793400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.793434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.793447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.797732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.797766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.797779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.801955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.801988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.802001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:37.585 [2024-07-12 11:41:40.806135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.806170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.806183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.810301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.810335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.810348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.814477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.814512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.814525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.818669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.818702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.818715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.822854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.822908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.822922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.827134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.827168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.827181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.585 [2024-07-12 11:41:40.831341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.585 [2024-07-12 11:41:40.831375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.585 [2024-07-12 11:41:40.831388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.835623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.835665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.835678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.839843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.839876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.839888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.844065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.844098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.844112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.848290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.848324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.848337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.852550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.852597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.852611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.856795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.856829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.856841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.860965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.860998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.861011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.865091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.865125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.865138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.869306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.869339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.869352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.873464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.873499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.873511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.877673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.877706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.877719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.881942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.881976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.881989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.886212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.886247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.886260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.890480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.890513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:37.586 [2024-07-12 11:41:40.890526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.894623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.894655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.894668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.898740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.898773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.898785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.902896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.902930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.902943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.907050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.907085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.907098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.911203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.911237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.911251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.915389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.915423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.915437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.919597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.919631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.919652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.923830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.923864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.923877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.927989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.928022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.928035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.932122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.932156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.932168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.936267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.936301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.936314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.940453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.940488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.940501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.944598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.944631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.944643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.948748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.586 [2024-07-12 11:41:40.948781] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.586 [2024-07-12 11:41:40.948794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.586 [2024-07-12 11:41:40.952828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.952861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.952874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.956922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.956955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.956968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.961099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.961133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.961146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.965315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.965349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.965362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.969437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.969471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.969483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.973625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.973657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.973669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.977739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.977773] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.977785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.981854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.981888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.981901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.986027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.986061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.986073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.990237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.990272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.990285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.994544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.994592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.994607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:40.998727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:40.998763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:40.998776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:41.002902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:41.002937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:41.002950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:41.007160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:41.007196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:41.007210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:41.011330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:41.011366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:41.011379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:41.015516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:41.015553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:41.015566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:41.019776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:41.019810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:41.019823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:41.023991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:41.024026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:41.024039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.587 [2024-07-12 11:41:41.028161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.587 [2024-07-12 11:41:41.028196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.587 [2024-07-12 11:41:41.028209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.847 [2024-07-12 11:41:41.032393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.847 [2024-07-12 11:41:41.032427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.847 [2024-07-12 11:41:41.032440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.847 [2024-07-12 11:41:41.036752] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.847 [2024-07-12 11:41:41.036786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.847 [2024-07-12 11:41:41.036799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.847 [2024-07-12 11:41:41.040997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.847 [2024-07-12 11:41:41.041031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.847 [2024-07-12 11:41:41.041044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.847 [2024-07-12 11:41:41.045210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.847 [2024-07-12 11:41:41.045244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.847 [2024-07-12 11:41:41.045257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.847 [2024-07-12 11:41:41.049335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.847 [2024-07-12 11:41:41.049370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.847 [2024-07-12 11:41:41.049382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.847 [2024-07-12 11:41:41.053606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.847 [2024-07-12 11:41:41.053650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.847 [2024-07-12 11:41:41.053663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.847 [2024-07-12 11:41:41.057892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.847 [2024-07-12 11:41:41.057927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.847 [2024-07-12 11:41:41.057940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.847 [2024-07-12 11:41:41.062111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:37.847 [2024-07-12 11:41:41.062146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.847 [2024-07-12 11:41:41.062159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:17:37.847 [2024-07-12 11:41:41.066325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0)
00:17:37.847 [2024-07-12 11:41:41.066359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:37.847 [2024-07-12 11:41:41.066372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence repeats for each remaining injected digest error on tqpair=(0x8adac0): the nvme_tcp.c:1459 data digest error, the READ command print (lba varies per command), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (sqhd cycles 0001/0021/0041/0061), spanning 11:41:41.066 through 11:41:41.670 ...]
00:17:38.371 [2024-07-12 11:41:41.670705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0)
00:17:38.371 [2024-07-12 11:41:41.670754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1
lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.371 [2024-07-12 11:41:41.670766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.371 [2024-07-12 11:41:41.674960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.675008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.675021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.679072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.679121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.679133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.683153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.683201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.683213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.687267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.687316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.687329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.691413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.691462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.691475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.695533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.695581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.695603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.699738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.699771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.699785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.703847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.703880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.703893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.707968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.708032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.708045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.712268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.712319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.712332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.716497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.716546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.716559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.720680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.720727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.720741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.724831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.724864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.724876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.729014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 
00:17:38.372 [2024-07-12 11:41:41.729048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.729060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.733287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.733322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.733335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.737462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.737497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.737510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.741647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.741683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.741696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.745827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.745864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.745878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.750051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.750086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.750099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.754212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.754245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.372 [2024-07-12 11:41:41.754259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.372 [2024-07-12 11:41:41.758363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.372 [2024-07-12 11:41:41.758397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.758409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.762610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.762643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.762656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.766749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.766781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.766794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.770894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.770929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.770942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.775161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.775196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.775210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.779420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.779454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.779467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.783705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.783737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.783751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.788031] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.788064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.788077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.792509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.792545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.792562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.796842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.796878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.796892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.801074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.801112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.801126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.805166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.805202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.805215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.809369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.809405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.809418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.373 [2024-07-12 11:41:41.813492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.373 [2024-07-12 11:41:41.813527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.373 [2024-07-12 11:41:41.813541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:38.633 [2024-07-12 11:41:41.817654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.817693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.817706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.821772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.821805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.821818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.825903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.825938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.825951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.830102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.830137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.830151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.834255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.834289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.834301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.838375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.838408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.838421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.842493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.842528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.842541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.846601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.846634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.846647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.850737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.850770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.850782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.854866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.854900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.854913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.858988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.859022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.859035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.863127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.863182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.863197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.867383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.867418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.633 [2024-07-12 11:41:41.867431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.633 [2024-07-12 11:41:41.871522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.633 [2024-07-12 11:41:41.871556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.871568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.875631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.875672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.875685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.879819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.879854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.879867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.883878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.883912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.883925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.887969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.888003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.888016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.892082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.892115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.892127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.896261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.896296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.896309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.900419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.900453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.634 [2024-07-12 11:41:41.900466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.904497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.904531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.904544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.908622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.908654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.908668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.913178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.913209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.913222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.917372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.917408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.917421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.921631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.921664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.921678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.925769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.925803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.925816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.929995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.930030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.930043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.934215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.934251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.934264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.938438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.938474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.938487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.942628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.942662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.942675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.946836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.946870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.946883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.950954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.950987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.951000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.955143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.955178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.955191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.959274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.959309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.959322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.963391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.963425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.963438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.967548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.967596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.967611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.971738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.971771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.971783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.975928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.975963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.975975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.980060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.980094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.980107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.984188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.634 [2024-07-12 11:41:41.984223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.984236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.988363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 
00:17:38.634 [2024-07-12 11:41:41.988397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.634 [2024-07-12 11:41:41.988410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.634 [2024-07-12 11:41:41.992556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:41.992600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:41.992613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:41.996696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:41.996729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:41.996742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.000928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.000962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.000975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.005123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.005157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.005170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.009276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.009310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.009323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.013397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.013431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.013444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.017521] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.017555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.017568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.021654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.021687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.021700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.025754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.025786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.025799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.029834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.029868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.029881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.034044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.034077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.034090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.038301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.038335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.038349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.042507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.042542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.042554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:38.635 [2024-07-12 11:41:42.046662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.046695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.046708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.050910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.050943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.050956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.055055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.055090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.055104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.059228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.059265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.059278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.063507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.063546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.063560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.067732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.067771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.067784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.072021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.072061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.072075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.635 [2024-07-12 11:41:42.076328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.635 [2024-07-12 11:41:42.076367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.635 [2024-07-12 11:41:42.076380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.080542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.080586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.080601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.084702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.084734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.084747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.088832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.088865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.088878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.092986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.093020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.093032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.097116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.097149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.097162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.101261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.101295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.101308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.105501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.105535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.105548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.109663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.109696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.109709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.113795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.113828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.113841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.117984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.906 [2024-07-12 11:41:42.118018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.906 [2024-07-12 11:41:42.118031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.906 [2024-07-12 11:41:42.122203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.122237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.122250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.126471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.126507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.126521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.130802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.130836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:38.907 [2024-07-12 11:41:42.130850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.134993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.135028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.135041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.139209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.139243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.139255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.143418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.143452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.143465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.147700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.147733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.147746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.151809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.151841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.151855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.155967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.155999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.156012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.160131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.160198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.160211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.164309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.164342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.164355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.168498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.168531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.168544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.172771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.172804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.172816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.177189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.177255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.177268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.181438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.181472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.181484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.185784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.185817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.185830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.190042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.190076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.190089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.194206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.194240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.194252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.198382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.198432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.198445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.202646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.202681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.202693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.206759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.206792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.206804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.210934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.210983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.210996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.215207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.215256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.215285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.219478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 
00:17:38.907 [2024-07-12 11:41:42.219527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.219540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.223742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.223776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.223790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.227860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.227894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.227907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.232143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.232182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.232201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.236464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.907 [2024-07-12 11:41:42.236498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.907 [2024-07-12 11:41:42.236511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.907 [2024-07-12 11:41:42.240676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.240709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.240722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.244969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.245004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.245018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.249091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.249126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.249139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.253171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.253205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.253219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.257327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.257363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.257377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.261618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.261655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.261668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.265786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.265819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.265832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.269952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.269984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.269997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.274153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.274187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.274201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.278265] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.278298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.278311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.282480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.282514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.282527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.286743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.286776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.286788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.290800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.290833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.290847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.294946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.294980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.294992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.299044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.299078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.299091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.303174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.303208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.303220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:38.908 [2024-07-12 11:41:42.307317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.307352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.307365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.311514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.311550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.311563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.315671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.315705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.315717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.319863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.319899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.319912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.324050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.324083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.324096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.328147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.328180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.328192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.332258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.332291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.332304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.336352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.336385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.336399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.340493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.340526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.340539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.344564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.344606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.344619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:38.908 [2024-07-12 11:41:42.348655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:38.908 [2024-07-12 11:41:42.348695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:38.908 [2024-07-12 11:41:42.348707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.352803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.352836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.352849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.356960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.356994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.357007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.361057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.361090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.361103] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.365200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.365234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.365246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.369355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.369389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.369402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.373456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.373494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.373508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.377599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.377634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.377647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.381746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.381779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.381792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.385947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.385980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.385993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.390121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.390155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.390168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.394236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.394268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.394281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.398369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.398404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.398417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.402535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.402569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.402596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.406703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.406736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.406749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.410931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.410965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.410978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.415203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.415238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.415252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.419309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.419342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.419355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.423466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.423500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.423513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.427614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.427655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.427669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.431811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.431846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.431858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.435924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.170 [2024-07-12 11:41:42.435959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.170 [2024-07-12 11:41:42.435972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.170 [2024-07-12 11:41:42.440062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.440097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.440109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.444228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.444262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.444275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.448436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.448470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.448483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.452608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.452641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.452654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.456801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.456834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.456847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.460946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.460994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.461007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.465085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.465118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.465131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.469231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.469264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.469277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.473324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.473357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.473370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.477488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.477523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.477536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.481626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.481659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.481672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.485813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.485847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.485859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.489952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.489986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.489999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.494071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.494105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.494117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.498231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.498266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.498278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.502480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.502514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.502527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.506830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 
00:17:39.171 [2024-07-12 11:41:42.506864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.506877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.510993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.511027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.511040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.515195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.515229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.515241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.519441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.519475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.519488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.523619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.523661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.523674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.527885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.527921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.527934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.532086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0) 00:17:39.171 [2024-07-12 11:41:42.532120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:39.171 [2024-07-12 11:41:42.532144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:39.171 [2024-07-12 11:41:42.536255] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8adac0)
00:17:39.171 [2024-07-12 11:41:42.536290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:39.171 [2024-07-12 11:41:42.536303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:39.171
00:17:39.171 Latency(us)
00:17:39.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.171 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:39.171 nvme0n1 : 2.00 7371.68 921.46 0.00 0.00 2166.65 1899.05 6464.23
00:17:39.171 ===================================================================================================================
00:17:39.171 Total : 7371.68 921.46 0.00 0.00 2166.65 1899.05 6464.23
00:17:39.171 0
00:17:39.171 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:39.171 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:39.171 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:39.171 | .driver_specific
00:17:39.171 | .nvme_error
00:17:39.171 | .status_code
00:17:39.171 | .command_transient_transport_error'
00:17:39.171 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 476 > 0 ))
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80550
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80550 ']'
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80550
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80550
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:39.448 killing process with pid 80550 Received shutdown signal, test time was about 2.000000 seconds
00:17:39.448
00:17:39.448 Latency(us)
00:17:39.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.448 ===================================================================================================================
00:17:39.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80550'
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80550
00:17:39.448 11:41:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80550
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80609
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80609 /var/tmp/bperf.sock
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80609 ']'
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:39.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:39.725 11:41:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:39.725 [2024-07-12 11:41:43.146112] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization...
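The get_transient_errcount/bperf_rpc steps traced above reduce to a single bdev_get_iostat RPC against the bdevperf instance, filtered with jq down to the per-status-code NVMe error counter that bdev_nvme keeps when it is started with --nvme-error-stat (seen later in this trace for the next run). A condensed sketch, with the socket path, bdev name and jq path copied from this log rather than from host/digest.sh itself:

get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The randread run above counted 476 of these completions, which is what the
# (( 476 > 0 )) check in the trace verifies before tearing bdevperf down.
(( $(get_transient_errcount nvme0n1) > 0 ))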
00:17:39.725 [2024-07-12 11:41:43.146204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80609 ] 00:17:39.982 [2024-07-12 11:41:43.289061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.982 [2024-07-12 11:41:43.398367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.240 [2024-07-12 11:41:43.451253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:40.806 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.806 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:40.806 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:40.806 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:41.067 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:41.067 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.067 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.067 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.067 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:41.067 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:41.326 nvme0n1 00:17:41.326 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:41.326 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.326 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.326 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.326 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:41.326 11:41:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:41.326 Running I/O for 2 seconds... 
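For readers following the xtrace, the randwrite/4096/128 digest-error pass that starts here reduces to a handful of RPC calls. The sketch below is assembled from the commands visible in the log above; the only pieces not taken verbatim are the polling loop, which is a simplified stand-in for the harness's waitforlisten helper, and the assumption that the accel_error_inject_error calls (issued through the rpc_cmd wrapper) go to the nvmf target's default RPC socket.

# Condensed sketch of this subtest's setup, under the assumptions stated above.
SPDK=/home/vagrant/spdk_repo/spdk

# 1. Start bdevperf in wait-for-RPC mode (-z) on its own RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
while ! "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1   # simplified stand-in for waitforlisten
done

# 2. Enable per-controller error counters and unlimited bdev retries.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Reset the target-side crc32c error injector, attach the controller with
#    data digest enabled (--ddgst), then arm the injector to corrupt 256
#    crc32c operations. (Assumption: these two injector calls use the target's
#    default RPC socket; the xtrace only shows the rpc_cmd wrapper.)
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# 4. Kick off the 2-second randwrite run that produces the digest errors below;
#    the harness later reads the error counters and kills $bperfpid.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests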
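Once the run completes, the script applies the same pass/fail check shown at the top of this excerpt for the previous subtest: read the controller's transient-transport-error counter out of bdev_get_iostat and require it to be non-zero (it was 476 for the randread run above). A minimal standalone version of that check, using the same socket and jq path as the xtrace (the errcount variable name is just for illustration):

# Count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
# Requires bdev_nvme_set_options --nvme-error-stat, as configured above.
errcount=$("/home/vagrant/spdk_repo/spdk/scripts/rpc.py" -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')

# Fail the subtest unless digest corruption actually produced transient errors.
(( errcount > 0 ))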
00:17:41.585 [2024-07-12 11:41:44.776795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fef90 00:17:41.585 [2024-07-12 11:41:44.779388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.585 [2024-07-12 11:41:44.779428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.585 [2024-07-12 11:41:44.792838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190feb58 00:17:41.585 [2024-07-12 11:41:44.795334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.585 [2024-07-12 11:41:44.795368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.808646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fe2e8 00:17:41.586 [2024-07-12 11:41:44.811121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.811151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.824516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fda78 00:17:41.586 [2024-07-12 11:41:44.827000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.827032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.840672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fd208 00:17:41.586 [2024-07-12 11:41:44.843185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.843225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.856920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fc998 00:17:41.586 [2024-07-12 11:41:44.859397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.859436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.873173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fc128 00:17:41.586 [2024-07-12 11:41:44.875664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.875703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:17:41.586 [2024-07-12 11:41:44.889521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fb8b8 00:17:41.586 [2024-07-12 11:41:44.891983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.892025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.905871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fb048 00:17:41.586 [2024-07-12 11:41:44.908350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.908404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.922240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fa7d8 00:17:41.586 [2024-07-12 11:41:44.924670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.924719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.938491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f9f68 00:17:41.586 [2024-07-12 11:41:44.940861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.940896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.954323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f96f8 00:17:41.586 [2024-07-12 11:41:44.956651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.956680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.970104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f8e88 00:17:41.586 [2024-07-12 11:41:44.972399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.972428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:44.985903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f8618 00:17:41.586 [2024-07-12 11:41:44.988184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:44.988217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:45.001714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f7da8 00:17:41.586 [2024-07-12 11:41:45.003972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:45.004002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:41.586 [2024-07-12 11:41:45.017595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f7538 00:17:41.586 [2024-07-12 11:41:45.019821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.586 [2024-07-12 11:41:45.019851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.033323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f6cc8 00:17:41.846 [2024-07-12 11:41:45.035525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.035555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.049149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f6458 00:17:41.846 [2024-07-12 11:41:45.051335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.051364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.065108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f5be8 00:17:41.846 [2024-07-12 11:41:45.067272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.067302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.080889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f5378 00:17:41.846 [2024-07-12 11:41:45.083046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.083078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.096819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f4b08 00:17:41.846 [2024-07-12 11:41:45.098966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.098998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.112659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f4298 00:17:41.846 [2024-07-12 11:41:45.114789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.114820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.128620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f3a28 00:17:41.846 [2024-07-12 11:41:45.130746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.130779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.144492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f31b8 00:17:41.846 [2024-07-12 11:41:45.146567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.146605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:41.846 [2024-07-12 11:41:45.160315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f2948 00:17:41.846 [2024-07-12 11:41:45.162362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.846 [2024-07-12 11:41:45.162391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:41.847 [2024-07-12 11:41:45.176272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f20d8 00:17:41.847 [2024-07-12 11:41:45.178322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.847 [2024-07-12 11:41:45.178356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:41.847 [2024-07-12 11:41:45.192326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f1868 00:17:41.847 [2024-07-12 11:41:45.194349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.847 [2024-07-12 11:41:45.194392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:41.847 [2024-07-12 11:41:45.208239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f0ff8 00:17:41.847 [2024-07-12 11:41:45.210251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.847 [2024-07-12 11:41:45.210282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:41.847 [2024-07-12 11:41:45.224096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f0788 00:17:41.847 [2024-07-12 11:41:45.226078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.847 [2024-07-12 11:41:45.226110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:41.847 [2024-07-12 11:41:45.239943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190eff18 00:17:41.847 [2024-07-12 11:41:45.241916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.847 [2024-07-12 11:41:45.241956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:41.847 [2024-07-12 11:41:45.255843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ef6a8 00:17:41.847 [2024-07-12 11:41:45.257788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.847 [2024-07-12 11:41:45.257818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:41.847 [2024-07-12 11:41:45.271572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190eee38 00:17:41.847 [2024-07-12 11:41:45.273495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.847 [2024-07-12 11:41:45.273524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:41.847 [2024-07-12 11:41:45.287419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ee5c8 00:17:41.847 [2024-07-12 11:41:45.289361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.847 [2024-07-12 11:41:45.289390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.105 [2024-07-12 11:41:45.303360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190edd58 00:17:42.105 [2024-07-12 11:41:45.305249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.105 [2024-07-12 11:41:45.305278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:42.105 [2024-07-12 11:41:45.319188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ed4e8 00:17:42.105 [2024-07-12 11:41:45.321056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.321085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.334915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ecc78 00:17:42.106 [2024-07-12 11:41:45.336770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.336801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.350737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ec408 00:17:42.106 [2024-07-12 11:41:45.352554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.352594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.366600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ebb98 00:17:42.106 [2024-07-12 11:41:45.368430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.368464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.382628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190eb328 00:17:42.106 [2024-07-12 11:41:45.384469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.384504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.398557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190eaab8 00:17:42.106 [2024-07-12 11:41:45.400363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.400396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.414441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ea248 00:17:42.106 [2024-07-12 11:41:45.416207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.416238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.430276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e99d8 00:17:42.106 [2024-07-12 11:41:45.432032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.432062] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.446175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e9168 00:17:42.106 [2024-07-12 11:41:45.447920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.447952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.462099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e88f8 00:17:42.106 [2024-07-12 11:41:45.463807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.463839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.477923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e8088 00:17:42.106 [2024-07-12 11:41:45.479606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.479638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.493870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e7818 00:17:42.106 [2024-07-12 11:41:45.495527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.495552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.509801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e6fa8 00:17:42.106 [2024-07-12 11:41:45.511459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.511494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.525709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e6738 00:17:42.106 [2024-07-12 11:41:45.527325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.527357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:42.106 [2024-07-12 11:41:45.541562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e5ec8 00:17:42.106 [2024-07-12 11:41:45.543162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.106 [2024-07-12 11:41:45.543193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.557408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e5658 00:17:42.365 [2024-07-12 11:41:45.558986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.559017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.573199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e4de8 00:17:42.365 [2024-07-12 11:41:45.574756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.574787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.589049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e4578 00:17:42.365 [2024-07-12 11:41:45.590598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.590629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.604866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e3d08 00:17:42.365 [2024-07-12 11:41:45.606367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.606397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.620697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e3498 00:17:42.365 [2024-07-12 11:41:45.622184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.622217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.636471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e2c28 00:17:42.365 [2024-07-12 11:41:45.637952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.637984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.652422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e23b8 00:17:42.365 [2024-07-12 11:41:45.653930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 
11:41:45.653964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.668595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e1b48 00:17:42.365 [2024-07-12 11:41:45.670088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.670122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.684746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e12d8 00:17:42.365 [2024-07-12 11:41:45.686205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.686241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.700830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e0a68 00:17:42.365 [2024-07-12 11:41:45.702259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.702294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.716881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e01f8 00:17:42.365 [2024-07-12 11:41:45.718291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.718326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.733298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190df988 00:17:42.365 [2024-07-12 11:41:45.734726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.734761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.749542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190df118 00:17:42.365 [2024-07-12 11:41:45.750930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.750964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.765664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190de8a8 00:17:42.365 [2024-07-12 11:41:45.767024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20089 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:42.365 [2024-07-12 11:41:45.767058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.781709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190de038 00:17:42.365 [2024-07-12 11:41:45.783038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.783072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:42.365 [2024-07-12 11:41:45.804421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190de038 00:17:42.365 [2024-07-12 11:41:45.806994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.365 [2024-07-12 11:41:45.807028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:42.625 [2024-07-12 11:41:45.820542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190de8a8 00:17:42.626 [2024-07-12 11:41:45.823098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.823136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.836787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190df118 00:17:42.626 [2024-07-12 11:41:45.839335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.839372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.853049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190df988 00:17:42.626 [2024-07-12 11:41:45.855549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.855595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.869144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e01f8 00:17:42.626 [2024-07-12 11:41:45.871629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.871673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.885262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e0a68 00:17:42.626 [2024-07-12 11:41:45.887748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23953 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.887784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.901301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e12d8 00:17:42.626 [2024-07-12 11:41:45.903731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.903763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.917243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e1b48 00:17:42.626 [2024-07-12 11:41:45.919635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.919675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.933051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e23b8 00:17:42.626 [2024-07-12 11:41:45.935392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.935424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.948817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e2c28 00:17:42.626 [2024-07-12 11:41:45.951163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.951194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.964615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e3498 00:17:42.626 [2024-07-12 11:41:45.966928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.966957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.980377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e3d08 00:17:42.626 [2024-07-12 11:41:45.982681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.982709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:45.996120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e4578 00:17:42.626 [2024-07-12 11:41:45.998387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:9334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:45.998416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:46.011906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e4de8 00:17:42.626 [2024-07-12 11:41:46.014172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:46.014201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:46.027698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e5658 00:17:42.626 [2024-07-12 11:41:46.029940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:46.029970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:46.043478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e5ec8 00:17:42.626 [2024-07-12 11:41:46.045732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:46.045764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:42.626 [2024-07-12 11:41:46.059246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e6738 00:17:42.626 [2024-07-12 11:41:46.061447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.626 [2024-07-12 11:41:46.061476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:42.891 [2024-07-12 11:41:46.075038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e6fa8 00:17:42.891 [2024-07-12 11:41:46.077256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.891 [2024-07-12 11:41:46.077288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:42.891 [2024-07-12 11:41:46.090914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e7818 00:17:42.891 [2024-07-12 11:41:46.093156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.891 [2024-07-12 11:41:46.093187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:42.891 [2024-07-12 11:41:46.106770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e8088 00:17:42.892 [2024-07-12 11:41:46.108921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:25212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.108952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.122557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e88f8 00:17:42.892 [2024-07-12 11:41:46.124701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.124731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.138491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e9168 00:17:42.892 [2024-07-12 11:41:46.140662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.140695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.154422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190e99d8 00:17:42.892 [2024-07-12 11:41:46.156554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.156591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.170402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ea248 00:17:42.892 [2024-07-12 11:41:46.172482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.172511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.186229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190eaab8 00:17:42.892 [2024-07-12 11:41:46.188300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.188329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.202122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190eb328 00:17:42.892 [2024-07-12 11:41:46.204177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.204210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.218067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ebb98 00:17:42.892 [2024-07-12 11:41:46.220087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.220119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.233906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ec408 00:17:42.892 [2024-07-12 11:41:46.235903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.235936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.249760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ecc78 00:17:42.892 [2024-07-12 11:41:46.251766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.251798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.265514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ed4e8 00:17:42.892 [2024-07-12 11:41:46.267463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.267494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.281352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190edd58 00:17:42.892 [2024-07-12 11:41:46.283294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.283324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.297271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ee5c8 00:17:42.892 [2024-07-12 11:41:46.299231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.299267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.313357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190eee38 00:17:42.892 [2024-07-12 11:41:46.315284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.315317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:42.892 [2024-07-12 11:41:46.329263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190ef6a8 00:17:42.892 [2024-07-12 11:41:46.331154] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:42.892 [2024-07-12 11:41:46.331185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.345148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190eff18 00:17:43.151 [2024-07-12 11:41:46.347009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.347037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.360940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f0788 00:17:43.151 [2024-07-12 11:41:46.362781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.362813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.376856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f0ff8 00:17:43.151 [2024-07-12 11:41:46.378683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.378722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.392884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f1868 00:17:43.151 [2024-07-12 11:41:46.394686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.394722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.408762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f20d8 00:17:43.151 [2024-07-12 11:41:46.410543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.410584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.424602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f2948 00:17:43.151 [2024-07-12 11:41:46.426352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.426385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.440403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f31b8 00:17:43.151 [2024-07-12 11:41:46.442145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.442174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.456179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f3a28 00:17:43.151 [2024-07-12 11:41:46.457896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.457925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.472018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f4298 00:17:43.151 [2024-07-12 11:41:46.473714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.473744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.487769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f4b08 00:17:43.151 [2024-07-12 11:41:46.489436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.489466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.503519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f5378 00:17:43.151 [2024-07-12 11:41:46.505190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.505221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.519426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f5be8 00:17:43.151 [2024-07-12 11:41:46.521089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.521117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.535238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f6458 00:17:43.151 [2024-07-12 11:41:46.536878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.151 [2024-07-12 11:41:46.536906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:43.151 [2024-07-12 11:41:46.551043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f6cc8 00:17:43.152 [2024-07-12 11:41:46.552670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.152 [2024-07-12 11:41:46.552702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.152 [2024-07-12 11:41:46.566887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f7538 00:17:43.152 [2024-07-12 11:41:46.568485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.152 [2024-07-12 11:41:46.568517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:43.152 [2024-07-12 11:41:46.582757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f7da8 00:17:43.152 [2024-07-12 11:41:46.584337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.152 [2024-07-12 11:41:46.584367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:43.152 [2024-07-12 11:41:46.598700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f8618 00:17:43.413 [2024-07-12 11:41:46.600261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.600294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.614521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f8e88 00:17:43.413 [2024-07-12 11:41:46.616072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.616102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.630354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f96f8 00:17:43.413 [2024-07-12 11:41:46.631870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.631901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.646310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190f9f68 00:17:43.413 [2024-07-12 11:41:46.647850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.647886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.662641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fa7d8 00:17:43.413 [2024-07-12 
11:41:46.664198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.664230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.679093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fb048 00:17:43.413 [2024-07-12 11:41:46.680630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.680665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.695529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fb8b8 00:17:43.413 [2024-07-12 11:41:46.697036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.697073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.712018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fc128 00:17:43.413 [2024-07-12 11:41:46.713487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.713525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.728419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fc998 00:17:43.413 [2024-07-12 11:41:46.729881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.729916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.744883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fd208 00:17:43.413 [2024-07-12 11:41:46.746312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.746350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:43.413 [2024-07-12 11:41:46.761395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c360) with pdu=0x2000190fda78 00:17:43.413 [2024-07-12 11:41:46.762808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:43.413 [2024-07-12 11:41:46.762845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:43.413 00:17:43.413 Latency(us) 00:17:43.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.413 Job: nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.413 nvme0n1 : 2.01 15872.76 62.00 0.00 0.00 8055.51 2457.60 30742.34 00:17:43.413 =================================================================================================================== 00:17:43.413 Total : 15872.76 62.00 0.00 0.00 8055.51 2457.60 30742.34 00:17:43.413 0 00:17:43.413 11:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:43.413 11:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:43.413 | .driver_specific 00:17:43.413 | .nvme_error 00:17:43.413 | .status_code 00:17:43.413 | .command_transient_transport_error' 00:17:43.413 11:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:43.413 11:41:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 125 > 0 )) 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80609 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80609 ']' 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80609 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80609 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:43.672 killing process with pid 80609 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80609' 00:17:43.672 Received shutdown signal, test time was about 2.000000 seconds 00:17:43.672 00:17:43.672 Latency(us) 00:17:43.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.672 =================================================================================================================== 00:17:43.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80609 00:17:43.672 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80609 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80672 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # 
waitforlisten 80672 /var/tmp/bperf.sock 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80672 ']' 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.931 11:41:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:43.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:43.931 Zero copy mechanism will not be used. 00:17:43.931 [2024-07-12 11:41:47.316768] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:17:43.931 [2024-07-12 11:41:47.316843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80672 ] 00:17:44.189 [2024-07-12 11:41:47.446030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.189 [2024-07-12 11:41:47.560884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.189 [2024-07-12 11:41:47.614402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:45.124 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-b nvme0 00:17:45.691 nvme0n1 00:17:45.691 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:45.691 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.691 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:45.691 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.691 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:45.691 11:41:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:45.691 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:45.691 Zero copy mechanism will not be used. 00:17:45.691 Running I/O for 2 seconds... 00:17:45.691 [2024-07-12 11:41:48.970017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:48.970328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:48.970365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:48.975301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:48.975610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:48.975640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:48.980515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:48.980828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:48.980866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:48.985727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:48.986022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:48.986052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:48.990915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:48.991207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:48.991246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.691 
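
[Editor's note] The commands traced just above drive this run: bdevperf is started against its own RPC socket, NVMe error statistics and unlimited retries are enabled on the initiator, CRC32C error injection is first disabled while the controller is attached with data digest checking (--ddgst), and injection is then re-enabled in corrupt mode before perform_tests kicks off the 2-second random-write workload. Below is a minimal sketch consolidating those calls. Paths and RPC arguments are copied from the trace; running bdevperf in the background with & (instead of the script's waitforlisten helper) and the assumption that the un-prefixed rpc.py calls reach the nvmf target application's default RPC socket are editorial simplifications, not part of the original script.

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf on core mask 0x2 with its own RPC socket: 128 KiB random writes,
    # queue depth 16, 2 seconds; -z makes it wait for a perform_tests RPC before doing I/O.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

    # Initiator side: keep per-controller NVMe error statistics and retry failed commands indefinitely.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Make sure CRC32C injection is off while attaching with data digest enabled.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-enable injection in corrupt mode (flags as in the trace) and run the workload;
    # each corrupted PDU then shows up as a data digest error followed by a
    # COMMAND TRANSIENT TRANSPORT ERROR completion, as in the log entries around this point.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
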
[2024-07-12 11:41:48.996102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:48.996407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:48.996445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.001278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.001570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.001611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.006483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.006787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.006823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.011745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.012047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.012083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.016963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.017255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.017288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.022150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.022443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.022477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.027389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.027704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.027728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.032552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.032865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.032894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.037777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.038069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.038098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.042998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.043297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.043327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.048238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.048529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.048558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.053408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.053712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.053740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.058641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.058933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.058962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.063852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.064151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.064178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.069029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.691 [2024-07-12 11:41:49.069319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.691 [2024-07-12 11:41:49.069348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.691 [2024-07-12 11:41:49.074208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.074507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.074536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.079438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.079754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.079782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.084646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.084937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.084965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.089885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.090176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.090204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.095130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.095431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.095459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.100360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.100674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.100702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.105561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.105863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.105892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.110777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.111068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.111096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.115931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.116225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.116254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.121150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.121454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.121483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.126355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.126661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.126690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.131536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.131850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 11:41:49.131878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.692 [2024-07-12 11:41:49.136759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.692 [2024-07-12 11:41:49.137050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.692 [2024-07-12 
11:41:49.137078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.141942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.142236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.142264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.147230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.147534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.147563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.152475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.152783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.152810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.157672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.157963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.157991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.162923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.163213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.163241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.168158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.168450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.168478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.173361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.173666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.173694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.178586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.178889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.178916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.183801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.184108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.184135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.189012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.189303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.189331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.194213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.194503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.194531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.199448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.199760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.199797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.204713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.205018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.205046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.209926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.210218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.210246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.215120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.215420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.215449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.220358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.220666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.220693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.225591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.225881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.225909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.230758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.231061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.231089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.236004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.236311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.952 [2024-07-12 11:41:49.236339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.952 [2024-07-12 11:41:49.241290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.952 [2024-07-12 11:41:49.241595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.241622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.246489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.246796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.246819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.251728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.252031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.252059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.256945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.257237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.257265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.262178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.262480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.262509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.267423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.267737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.267766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.272650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.272938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.272965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.277831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.278126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.278153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.283050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.283342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.283370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.288318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.288623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.288647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.293605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.293902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.293930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.298851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.299146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.299174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.304129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.304418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.304446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.309379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.309688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.309710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.314559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.314868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.314895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.319811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 
[2024-07-12 11:41:49.320114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.320142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.325025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.325317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.325339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.330248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.330540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.330568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.335427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.335739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.335768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.340695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.340989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.341027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.345889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.346180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.346208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.351163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.351458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.351486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.356449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with 
pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.356755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.356777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.361606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.361905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.361933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.366810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.367109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.367136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.372086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.372379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.372407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.377315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.377620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.377647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.382521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.382828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.382856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.387786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.388080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.953 [2024-07-12 11:41:49.388108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.953 [2024-07-12 11:41:49.393004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.953 [2024-07-12 11:41:49.393309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.954 [2024-07-12 11:41:49.393336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.954 [2024-07-12 11:41:49.398243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:45.954 [2024-07-12 11:41:49.398532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.954 [2024-07-12 11:41:49.398560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.403477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.403790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.403814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.408652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.408938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.408966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.413861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.414151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.414178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.419078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.419371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.419399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.424272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.424571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.424610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.429504] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.429809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.429837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.434665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.434959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.434986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.439921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.440211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.440238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.445134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.445431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.445459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.450365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.450669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.450692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.455548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.455874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.455904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.460798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.461091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.461120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:46.213 [2024-07-12 11:41:49.466008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.466299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.466322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.471210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.471500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.471522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.476433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.476737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.476778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.481688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.481981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.482008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.486864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.487154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.487182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.492069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.492358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.492385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.497259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.497553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.497593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.502465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.502770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.502798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.507612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.507916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.507944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.512798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.513091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.513130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.518026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.213 [2024-07-12 11:41:49.518326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.213 [2024-07-12 11:41:49.518354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.213 [2024-07-12 11:41:49.523236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.523531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.523559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.528512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.528815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.528842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.533767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.534059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.534086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.539028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.539321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.539349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.544239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.544528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.544556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.549466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.549774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.549796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.554630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.554929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.554956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.559996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.560286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.560314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.565211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.565501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.565529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.570527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.570831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.570859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.575778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.576070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.576099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.581019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.581310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.581338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.586204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.586498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.586526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.591380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.591698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.591721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.596620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.596917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.596945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.601875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.602169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.602197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.607067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.607362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 
[2024-07-12 11:41:49.607390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.612362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.612667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.612695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.617590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.617879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.617906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.622794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.623088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.623115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.628038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.628331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.628359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.633190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.633482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.633510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.638432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.638736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.638758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.643639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.643945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.643972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.648863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.649155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.649183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.654088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.654379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.654406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.214 [2024-07-12 11:41:49.659332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.214 [2024-07-12 11:41:49.659636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.214 [2024-07-12 11:41:49.659668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.664539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.664849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.664878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.669762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.670052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.670080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.674961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.675252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.675280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.680199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.680490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.680517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.685420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.685725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.685752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.690643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.690933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.690961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.695819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.696112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.696139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.701090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.701384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.701412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.706294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.706598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.706621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.711454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.711778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.711807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.716607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.716898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.716925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.721762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.722056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.722083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.727009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.727301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.727330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.732252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.732542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.732570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.737450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.737755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.737783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.742662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.742949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.742977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.747943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.748234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.748262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.753164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 
[2024-07-12 11:41:49.753454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.474 [2024-07-12 11:41:49.753483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.474 [2024-07-12 11:41:49.758397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.474 [2024-07-12 11:41:49.758701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.758731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.763599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.763915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.763936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.768744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.769035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.769065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.774016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.774308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.774336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.779224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.779515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.779555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.784432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.784746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.784773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.789667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with 
pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.789962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.789989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.794914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.795207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.795235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.800144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.800437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.800464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.805347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.805650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.805678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.810558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.810862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.810884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.815830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.816140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.816168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.821159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.821453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.821481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.826472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.826791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.826820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.831719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.832011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.832038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.836995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.837290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.837318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.842219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.842509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.842538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.847453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.847779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.847807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.852887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.853185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.853212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.858086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.858379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.858407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.863321] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.863623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.863651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.868540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.868844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.868872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.873778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.874079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.874108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.878985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.879285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.879313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.884202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.884503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.475 [2024-07-12 11:41:49.884531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.475 [2024-07-12 11:41:49.889396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.475 [2024-07-12 11:41:49.889703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.476 [2024-07-12 11:41:49.889731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.476 [2024-07-12 11:41:49.894561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.476 [2024-07-12 11:41:49.894863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.476 [2024-07-12 11:41:49.894891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:17:46.476 [2024-07-12 11:41:49.899812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.476 [2024-07-12 11:41:49.900125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.476 [2024-07-12 11:41:49.900153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.476 [2024-07-12 11:41:49.905048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.476 [2024-07-12 11:41:49.905340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.476 [2024-07-12 11:41:49.905369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.476 [2024-07-12 11:41:49.910215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.476 [2024-07-12 11:41:49.910505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.476 [2024-07-12 11:41:49.910533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.476 [2024-07-12 11:41:49.915401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.476 [2024-07-12 11:41:49.915711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.476 [2024-07-12 11:41:49.915739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.476 [2024-07-12 11:41:49.920629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.476 [2024-07-12 11:41:49.920921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.476 [2024-07-12 11:41:49.920948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.735 [2024-07-12 11:41:49.925753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.926043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.926071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.930915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.931209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.931237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.936125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.936414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.936442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.941340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.941652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.941674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.946534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.946843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.946871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.951693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.951983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.952011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.956856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.957168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.957195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.962085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.962379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.962407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.967309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.967613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.967640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.972514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.972819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.972842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.977822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.978120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.978149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.983094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.983386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.983414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.988333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.988635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.988663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.993590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.993883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.993910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:49.998820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:49.999113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:49.999140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.003987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.004284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.004312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.009182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.009472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.009500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.014403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.014709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.014737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.019643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.019946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.019975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.024821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.025114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.025141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.030012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.030306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.030329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.035254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.035549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.035591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.040453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.040757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 
11:41:50.040780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.045651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.045942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.045970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.050876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.051170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.051197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.056052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.056346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.056374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.061251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.061547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.061587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.066476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.066780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.736 [2024-07-12 11:41:50.066807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.736 [2024-07-12 11:41:50.071758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.736 [2024-07-12 11:41:50.072049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.072077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.076929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.077222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.077249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.082168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.082461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.082507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.087344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.087652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.087687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.092547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.092853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.092881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.097719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.098020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.098047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.103430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.103756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.103783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.108665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.108959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.108986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.113880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.114184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.114212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.119074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.119373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.119401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.124356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.124658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.124686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.129539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.129846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.129874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.134779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.135070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.135098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.140051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.140350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.140381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.145328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.145640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.145665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.150596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.150892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.150922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.155850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.156149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.156178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.161094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.161387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.161415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.166340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.166647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.166670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.171521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.171849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.171877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.176789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.177082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.177110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.737 [2024-07-12 11:41:50.182027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.737 [2024-07-12 11:41:50.182319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.737 [2024-07-12 11:41:50.182341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.187285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.187592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.187618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.192463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.192769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.192797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.197725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.198020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.198048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.202874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.203169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.203198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.208101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.208396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.208426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.213319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.213628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.213657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.218572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.218900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.218930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.223830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 
[2024-07-12 11:41:50.224133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.224162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.229103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.229395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.229423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.234351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.234675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.234705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.239569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.239892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.239920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.244814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.245105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.245127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.250050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.250350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.250392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.255298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.255605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.255634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.260554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) 
with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.260858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.260885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.265787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.266095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.266123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.271030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.271324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.271352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.276321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.276629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.276652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.281539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.281855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.281883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.286800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.287094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.287132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.292072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.292367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.292396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.297293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.297597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.297620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.302486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.302789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.302817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.307734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.308027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.308054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.312989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.997 [2024-07-12 11:41:50.313292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-07-12 11:41:50.313320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.997 [2024-07-12 11:41:50.318155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.318448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.318477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.323332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.323638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.323675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.328512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.328816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.328844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.333750] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.334043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.334071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.338934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.339228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.339257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.344180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.344474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.344502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.349356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.349667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.349698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.354618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.354920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.354948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.359912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.360207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.360235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.365141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.365439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.365468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
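Each record above pairs a data digest (CRC32C) failure reported by tcp.c with the completion it produces: generic status (00/22), printed as COMMAND TRANSIENT TRANSPORT ERROR, meaning the WRITE may simply be resubmitted. The sketch below is only an illustration of how a host completion callback could recognize that status and retry; the callback name, the io_ctx struct, and the retry policy are assumptions for illustration and are not taken from this test's code.

    /* Minimal sketch, assuming an SPDK host application with a per-I/O
     * context; not the code exercised by this test run. */
    #include "spdk/nvme.h"

    struct io_ctx {              /* hypothetical per-I/O bookkeeping */
        int retries_left;
    };

    static void
    write_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        struct io_ctx *ctx = arg;

        if (!spdk_nvme_cpl_is_error(cpl)) {
            return;              /* WRITE completed cleanly */
        }

        /* sct 0x0 / sc 0x22 is the "(00/22)" status printed in the log:
         * the data digest check failed in transit, so the command is a
         * candidate for resubmission rather than a hard failure. */
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == 0x22 &&
            ctx->retries_left-- > 0) {
            /* resubmit the I/O here (omitted in this sketch) */
            return;
        }

        /* any other error: surface it to the caller */
    }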
00:17:46.998 [2024-07-12 11:41:50.370353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.370657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.370685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.375558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.375872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.375894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.380791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.381084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.381110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.385961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.386253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.386281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.391187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.391485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.391514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.396377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.396684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.396707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.401603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.401894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.401922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.406846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.407140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.407169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.412073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.412368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.412396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.417311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.417619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.417642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.422494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.422800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.422827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.427725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.428023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.428055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.432934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.433235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.433262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.438138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.438430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.438458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.998 [2024-07-12 11:41:50.443274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:46.998 [2024-07-12 11:41:50.443569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.998 [2024-07-12 11:41:50.443608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.257 [2024-07-12 11:41:50.448499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.257 [2024-07-12 11:41:50.448802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.257 [2024-07-12 11:41:50.448829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.257 [2024-07-12 11:41:50.453764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.257 [2024-07-12 11:41:50.454057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.257 [2024-07-12 11:41:50.454086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.257 [2024-07-12 11:41:50.459050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.257 [2024-07-12 11:41:50.459339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.257 [2024-07-12 11:41:50.459367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.257 [2024-07-12 11:41:50.464370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.257 [2024-07-12 11:41:50.464691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.257 [2024-07-12 11:41:50.464718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.257 [2024-07-12 11:41:50.469680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.257 [2024-07-12 11:41:50.469983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.257 [2024-07-12 11:41:50.470010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.257 [2024-07-12 11:41:50.474857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.257 [2024-07-12 11:41:50.475147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.257 [2024-07-12 11:41:50.475169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.257 [2024-07-12 11:41:50.480053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.480342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.480370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.485221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.485511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.485539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.490435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.490738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.490761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.495621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.495925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.495955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.500741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.501031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.501059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.505985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.506273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.506301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.511162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.511454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 
[2024-07-12 11:41:50.511482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.516330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.516635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.516663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.521514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.521822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.521849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.526718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.527012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.527040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.531919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.532221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.532248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.537101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.537403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.537431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.542307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.542614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.542636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.547465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.547782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.547811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.552720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.553013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.553041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.557922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.558217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.558246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.563137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.563429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.563458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.568410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.568726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.568754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.573615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.573904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.573931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.578818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.579107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.579134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.584031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.584325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.584352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.589241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.589535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.589563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.594409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.594727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.594754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.599739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.600033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.600062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.605016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.605310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.605338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.610308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.610610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.610637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.615493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.615824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.615852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.620698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.620989] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.621016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.625946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.258 [2024-07-12 11:41:50.626239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.258 [2024-07-12 11:41:50.626267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.258 [2024-07-12 11:41:50.631229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.631524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.631552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.636518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.636824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.636852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.641756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.642049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.642071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.646870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.647167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.647195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.652094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.652383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.652411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.657395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.657701] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.657729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.662701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.662994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.663021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.667866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.668156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.668184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.673065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.673374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.673404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.678320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.678640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.678668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.683571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.683902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.683929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.688814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.689124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.689151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.694065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 
00:17:47.259 [2024-07-12 11:41:50.694354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.694381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.259 [2024-07-12 11:41:50.699262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.259 [2024-07-12 11:41:50.699567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-07-12 11:41:50.699604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.704505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.704812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.518 [2024-07-12 11:41:50.704840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.709687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.709977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.518 [2024-07-12 11:41:50.710004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.714831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.715126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.518 [2024-07-12 11:41:50.715153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.720044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.720340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.518 [2024-07-12 11:41:50.720368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.725321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.725640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.518 [2024-07-12 11:41:50.725668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.730666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.730976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.518 [2024-07-12 11:41:50.731003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.735996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.736290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.518 [2024-07-12 11:41:50.736317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.741187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.741483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.518 [2024-07-12 11:41:50.741512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.518 [2024-07-12 11:41:50.746443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.518 [2024-07-12 11:41:50.746744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.746772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.751694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.751985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.752013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.756954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.757253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.757281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.762220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.762514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.762542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.767496] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.767807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.767830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.772766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.773062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.773090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.777956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.778245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.778277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.783220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.783525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.783568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.788549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.788874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.788902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.793793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.794098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.794126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.799058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.799373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.799401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:47.519 [2024-07-12 11:41:50.804309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.804611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.804638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.809514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.809818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.809847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.814745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.815039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.815067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.820042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.820332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.820360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.825232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.825527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.825555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.830413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.830715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.830742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.835594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.835894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.519 [2024-07-12 11:41:50.835922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.519 [2024-07-12 11:41:50.840810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.519 [2024-07-12 11:41:50.841108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.841135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.845954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.846244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.846271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.851083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.851374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.851401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.856270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.856563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.856602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.861470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.861776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.861808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.866764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.867055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.867087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.872012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.872307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.872334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.877267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.877558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.877600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.882576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.882906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.882933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.887919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.888212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.888239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.893230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.893536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.893562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.898602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.898939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.898966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.903898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.904192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.904219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.909166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.909472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.909500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.914466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.914788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.914817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.919697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.919995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.920023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.924851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.925147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.925175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.930008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.930304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.930333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.935222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.935514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.520 [2024-07-12 11:41:50.935543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.520 [2024-07-12 11:41:50.940418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.520 [2024-07-12 11:41:50.940721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.521 [2024-07-12 11:41:50.940749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.521 [2024-07-12 11:41:50.945674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.521 [2024-07-12 11:41:50.945961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.521 
[2024-07-12 11:41:50.945988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.521 [2024-07-12 11:41:50.950862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.521 [2024-07-12 11:41:50.951156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.521 [2024-07-12 11:41:50.951183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.521 [2024-07-12 11:41:50.956053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.521 [2024-07-12 11:41:50.956342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.521 [2024-07-12 11:41:50.956371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.521 [2024-07-12 11:41:50.961228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb8c500) with pdu=0x2000190fef90 00:17:47.521 [2024-07-12 11:41:50.961518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.521 [2024-07-12 11:41:50.961546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.780 00:17:47.780 Latency(us) 00:17:47.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.780 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:47.781 nvme0n1 : 2.00 5927.41 740.93 0.00 0.00 2693.38 2159.71 5749.29 00:17:47.781 =================================================================================================================== 00:17:47.781 Total : 5927.41 740.93 0.00 0.00 2693.38 2159.71 5749.29 00:17:47.781 0 00:17:47.781 11:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:47.781 11:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:47.781 11:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:47.781 11:41:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:47.781 | .driver_specific 00:17:47.781 | .nvme_error 00:17:47.781 | .status_code 00:17:47.781 | .command_transient_transport_error' 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 382 > 0 )) 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80672 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80672 ']' 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80672 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.040 
11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80672 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:48.040 killing process with pid 80672 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80672' 00:17:48.040 Received shutdown signal, test time was about 2.000000 seconds 00:17:48.040 00:17:48.040 Latency(us) 00:17:48.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.040 =================================================================================================================== 00:17:48.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80672 00:17:48.040 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80672 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80460 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80460 ']' 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80460 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80460 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:48.298 killing process with pid 80460 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80460' 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80460 00:17:48.298 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80460 00:17:48.556 00:17:48.556 real 0m18.390s 00:17:48.556 user 0m35.499s 00:17:48.556 sys 0m4.660s 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:48.556 ************************************ 00:17:48.556 END TEST nvmf_digest_error 00:17:48.556 ************************************ 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:48.556 
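The get_transient_errcount check traced above reduces to one RPC against the bdevperf socket plus a jq filter over the returned iostat JSON. A minimal standalone sketch of that query, assuming bdevperf is still listening on /var/tmp/bperf.sock and the attached bdev is still named nvme0n1:

  # Query per-bdev I/O statistics over the bdevperf RPC socket and pull out the
  # NVMe transient transport error counter that the digest-error test asserts on.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The test passes when at least one injected data-digest error surfaced as a
  # transient transport error (382 of them in this run).
  (( errcount > 0 )) && echo "saw $errcount transient transport errors"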
11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.556 rmmod nvme_tcp 00:17:48.556 rmmod nvme_fabrics 00:17:48.556 rmmod nvme_keyring 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80460 ']' 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80460 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80460 ']' 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80460 00:17:48.556 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80460) - No such process 00:17:48.556 Process with pid 80460 is not found 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80460 is not found' 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:48.556 00:17:48.556 real 0m38.349s 00:17:48.556 user 1m12.513s 00:17:48.556 sys 0m10.272s 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.556 11:41:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:48.556 ************************************ 00:17:48.556 END TEST nvmf_digest 00:17:48.556 ************************************ 00:17:48.556 11:41:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:48.556 11:41:51 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:17:48.556 11:41:51 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:17:48.556 11:41:51 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:48.556 11:41:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:48.556 11:41:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.556 11:41:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.556 ************************************ 00:17:48.556 START TEST nvmf_host_multipath 00:17:48.556 ************************************ 00:17:48.556 11:41:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:48.814 * Looking for test storage... 
00:17:48.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:17:48.814 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:48.815 11:41:52 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:48.815 Cannot find device "nvmf_tgt_br" 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.815 Cannot find device "nvmf_tgt_br2" 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:17:48.815 Cannot find device "nvmf_tgt_br" 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:48.815 Cannot find device "nvmf_tgt_br2" 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:48.815 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:49.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:17:49.075 00:17:49.075 --- 10.0.0.2 ping statistics --- 00:17:49.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.075 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:49.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:49.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:49.075 00:17:49.075 --- 10.0.0.3 ping statistics --- 00:17:49.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.075 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:49.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:49.075 00:17:49.075 --- 10.0.0.1 ping statistics --- 00:17:49.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.075 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80934 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80934 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 80934 ']' 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.075 11:41:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:49.075 [2024-07-12 11:41:52.487776] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:17:49.075 [2024-07-12 11:41:52.488540] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.332 [2024-07-12 11:41:52.630427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:49.332 [2024-07-12 11:41:52.759557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.332 [2024-07-12 11:41:52.759626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.332 [2024-07-12 11:41:52.759641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.332 [2024-07-12 11:41:52.759651] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.332 [2024-07-12 11:41:52.759672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
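Before the multipath test proper starts, nvmftestinit (traced above) wires the host initiator to a target running in its own network namespace. A condensed sketch of that topology, using the namespace, interface and address names from the trace and leaving out the second target interface (nvmf_tgt_if2 / 10.0.0.3), which is created the same way:

  # The initiator reaches the namespaced target through a bridge:
  #   nvmf_init_if (10.0.0.1) -- nvmf_init_br -- nvmf_br -- nvmf_tgt_br -- nvmf_tgt_if (10.0.0.2, in netns)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # The target itself then runs inside the namespace, as nvmfappstart does above:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &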
00:17:49.332 [2024-07-12 11:41:52.760463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.332 [2024-07-12 11:41:52.760527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.591 [2024-07-12 11:41:52.817078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:50.156 11:41:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.156 11:41:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:50.156 11:41:53 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.156 11:41:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.156 11:41:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:50.156 11:41:53 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.156 11:41:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80934 00:17:50.156 11:41:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:50.414 [2024-07-12 11:41:53.686969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.414 11:41:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:50.673 Malloc0 00:17:50.673 11:41:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:50.931 11:41:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.189 11:41:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.448 [2024-07-12 11:41:54.675855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.448 11:41:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:51.727 [2024-07-12 11:41:54.951867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80988 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80988 /var/tmp/bdevperf.sock 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80988 ']' 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.727 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.727 11:41:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:52.006 11:41:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.006 11:41:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:52.006 11:41:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:52.263 11:41:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:52.828 Nvme0n1 00:17:52.828 11:41:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:53.087 Nvme0n1 00:17:53.087 11:41:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:53.087 11:41:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:54.022 11:41:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:54.022 11:41:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:54.282 11:41:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:54.540 11:41:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:54.540 11:41:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:54.540 11:41:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81027 00:17:54.540 11:41:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:01.098 11:42:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:01.098 11:42:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:01.098 Attaching 4 probes... 
00:18:01.098 @path[10.0.0.2, 4421]: 17441 00:18:01.098 @path[10.0.0.2, 4421]: 17786 00:18:01.098 @path[10.0.0.2, 4421]: 17807 00:18:01.098 @path[10.0.0.2, 4421]: 17608 00:18:01.098 @path[10.0.0.2, 4421]: 17856 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81027 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:01.098 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:01.401 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:01.401 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81144 00:18:01.401 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:01.401 11:42:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:08.026 11:42:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:08.026 11:42:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:08.026 11:42:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:08.026 11:42:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.026 Attaching 4 probes... 
00:18:08.026 @path[10.0.0.2, 4420]: 17463 00:18:08.026 @path[10.0.0.2, 4420]: 17793 00:18:08.026 @path[10.0.0.2, 4420]: 17984 00:18:08.026 @path[10.0.0.2, 4420]: 18132 00:18:08.026 @path[10.0.0.2, 4420]: 18216 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81144 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:08.026 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:08.286 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:08.286 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81252 00:18:08.286 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:08.286 11:42:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.851 Attaching 4 probes... 
00:18:14.851 @path[10.0.0.2, 4421]: 13618 00:18:14.851 @path[10.0.0.2, 4421]: 17476 00:18:14.851 @path[10.0.0.2, 4421]: 17591 00:18:14.851 @path[10.0.0.2, 4421]: 17601 00:18:14.851 @path[10.0.0.2, 4421]: 17555 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81252 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:14.851 11:42:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:14.851 11:42:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:15.109 11:42:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:15.109 11:42:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:15.109 11:42:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81370 00:18:15.109 11:42:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:21.738 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:21.738 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:21.738 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.739 Attaching 4 probes... 
00:18:21.739 00:18:21.739 00:18:21.739 00:18:21.739 00:18:21.739 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81370 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:21.739 11:42:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:21.739 11:42:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:21.739 11:42:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81483 00:18:21.739 11:42:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:21.739 11:42:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.339 Attaching 4 probes... 
00:18:28.339 @path[10.0.0.2, 4421]: 16842 00:18:28.339 @path[10.0.0.2, 4421]: 16978 00:18:28.339 @path[10.0.0.2, 4421]: 17152 00:18:28.339 @path[10.0.0.2, 4421]: 17114 00:18:28.339 @path[10.0.0.2, 4421]: 17141 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81483 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:28.339 11:42:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:29.271 11:42:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:29.271 11:42:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81605 00:18:29.271 11:42:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:29.271 11:42:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.851 Attaching 4 probes... 
00:18:35.851 @path[10.0.0.2, 4420]: 17161 00:18:35.851 @path[10.0.0.2, 4420]: 17446 00:18:35.851 @path[10.0.0.2, 4420]: 17432 00:18:35.851 @path[10.0.0.2, 4420]: 17520 00:18:35.851 @path[10.0.0.2, 4420]: 17528 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81605 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.851 11:42:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.851 [2024-07-12 11:42:39.186653] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:35.851 11:42:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:36.108 11:42:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:42.665 11:42:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:42.665 11:42:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81781 00:18:42.665 11:42:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:42.665 11:42:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.240 Attaching 4 probes... 
00:18:49.240 @path[10.0.0.2, 4421]: 16922 00:18:49.240 @path[10.0.0.2, 4421]: 17014 00:18:49.240 @path[10.0.0.2, 4421]: 17226 00:18:49.240 @path[10.0.0.2, 4421]: 17008 00:18:49.240 @path[10.0.0.2, 4421]: 17408 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81781 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80988 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80988 ']' 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80988 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80988 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:49.240 killing process with pid 80988 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80988' 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80988 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80988 00:18:49.240 Connection closed with partial response: 00:18:49.240 00:18:49.240 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80988 00:18:49.240 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:49.240 [2024-07-12 11:41:55.018918] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:18:49.240 [2024-07-12 11:41:55.019024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80988 ] 00:18:49.240 [2024-07-12 11:41:55.152884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.240 [2024-07-12 11:41:55.282345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.240 [2024-07-12 11:41:55.338366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:49.240 Running I/O for 90 seconds... 
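Before the raw try.txt dump that follows, it is worth spelling out what each confirm_io_on_port pass above actually checked: a bpftrace probe (nvmf_path.bt) is attached to the target pid to count I/O per @path[address, port], bdevperf is given a few seconds to run, and the port that carried I/O is compared against the listener whose ANA state matches the expectation reported by nvmf_subsystem_get_listeners. Below is a condensed sketch of that check, assuming the pid (80934), the NQN and the relative file locations from this run; the explicit redirect into trace.txt is an assumption, the run above simply reads the samples back from test/nvmf/host/trace.txt.

  # Condensed confirm_io_on_port sketch (pid, NQN and paths taken from the run above).
  expected_state=optimized           # or non_optimized, or "" when both paths are inaccessible
  scripts/bpftrace.sh 80934 scripts/bpf/nvmf_path.bt > test/nvmf/host/trace.txt &
  dtrace_pid=$!
  sleep 6
  # Which listener does the target advertise with the expected ANA state?
  active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
  # Which port actually served I/O according to the bpftrace samples?
  port=$(awk '$1=="@path[10.0.0.2," {print $2}' test/nvmf/host/trace.txt | cut -d ']' -f1 | sed -n 1p)
  kill "$dtrace_pid"
  rm -f test/nvmf/host/trace.txt
  [[ "$port" == "$active_port" ]]    # host I/O must follow the ANA state (both empty in the inaccessible case)

This is why the pass with both listeners set to inaccessible shows bare timestamps instead of @path lines: no I/O reaches either port, both sides of the comparison are empty strings, and the check still passes.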
00:18:49.240 [2024-07-12 11:42:04.693217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.240 [2024-07-12 11:42:04.693294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:49.240 [2024-07-12 11:42:04.693357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.240 [2024-07-12 11:42:04.693379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:49.240 [2024-07-12 11:42:04.693402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.240 [2024-07-12 11:42:04.693418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.693453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.693488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.693524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.693559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.693615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.693975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.693997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.694424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.694462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.694498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.694534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.694570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:49.241 [2024-07-12 11:42:04.694624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.694660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.241 [2024-07-12 11:42:04.694695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.694981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.694995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.695016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.695030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.695051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.695066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.695087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.695102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.695123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.241 [2024-07-12 11:42:04.695138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:49.241 [2024-07-12 11:42:04.695159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.695635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.695672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.695721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:18:49.242 [2024-07-12 11:42:04.695751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.695768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.695805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.695843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.695879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.695915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.695972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.695987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.242 [2024-07-12 11:42:04.696508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.696544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.696590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.696629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.696673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.696711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.242 [2024-07-12 11:42:04.696747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:49.242 [2024-07-12 11:42:04.696768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.696783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.696804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.696818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.696839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:49.243 [2024-07-12 11:42:04.696853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.696875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.696889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.696910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.696924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.696945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.696960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.696987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.697023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.697059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.697094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.697137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.697173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.697210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.697246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.697282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.697296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.698894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.698925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.698955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.698972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.698993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.243 [2024-07-12 11:42:04.699009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:18:49.243 [2024-07-12 11:42:04.699574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:04.699841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:04.699856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:11.227489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:11.227561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:11.227634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:11.227656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:11.227679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.243 [2024-07-12 11:42:11.227704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:49.243 [2024-07-12 11:42:11.227728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.227742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.227762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.227776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.227797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.227811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.227831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.227845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.227866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.227901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.227930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.227946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.227966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.227981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.244 [2024-07-12 11:42:11.228083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.244 [2024-07-12 11:42:11.228117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.244 [2024-07-12 11:42:11.228151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.244 [2024-07-12 11:42:11.228185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.244 [2024-07-12 11:42:11.228220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.244 [2024-07-12 11:42:11.228255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.244 [2024-07-12 11:42:11.228289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.244 [2024-07-12 11:42:11.228323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.244 [2024-07-12 11:42:11.228976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:49.244 [2024-07-12 11:42:11.228997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.229045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.229079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.229114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.229148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229168] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.229182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.229216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.229259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.229293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 
sqhd:006d p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.245 [2024-07-12 11:42:11.229973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.229994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 
11:42:11.230265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.245 [2024-07-12 11:42:11.230507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:49.245 [2024-07-12 11:42:11.230528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.230542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122856 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.230974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.230988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.231022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.231066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.246 [2024-07-12 11:42:11.231100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e 
p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.231659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.231673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:49.246 [2024-07-12 11:42:11.232377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.246 [2024-07-12 11:42:11.232403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:18:49.246 [2024-07-12 11:42:11.232438 - 11:42:11.233218] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (condensed run of I/O notices) WRITE sqid:1 lba:122968-123032 len:8 and READ sqid:1 lba:122400-122456 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0029-0039 p:0 m:0 dnr:0
00:18:49.247 [2024-07-12 11:42:18.327486 - 11:42:18.333147] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (condensed run of I/O notices) WRITE sqid:1 lba:25000-25640 len:8 and READ sqid:1 lba:24680-24992 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:18:49.250 [2024-07-12 11:42:18.333161 - 11:42:18.333452] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (condensed run of I/O notices) WRITE sqid:1 lba:25640-25688 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0001-0007 p:0 m:0 dnr:0
00:18:49.250 [2024-07-12 11:42:31.672429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:49.250 [2024-07-12 11:42:31.672493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:49.250 [2024-07-12 11:42:31.672528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:49.250 [2024-07-12 11:42:31.672541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:49.250 [2024-07-12 11:42:31.672554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:49.250 [2024-07-12 11:42:31.672567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:49.250 [2024-07-12 11:42:31.672580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:49.250 [2024-07-12 11:42:31.672604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:49.250 [2024-07-12 11:42:31.672619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c3100 is same with the state(5) to be set
00:18:49.250 [2024-07-12 11:42:31.672701 - 11:42:31.674519] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (condensed run of I/O notices) READ sqid:1 lba:64880-65080 len:8 and WRITE sqid:1 lba:65456-65720 len:8, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:49.251 [2024-07-12 11:42:31.674532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.251 [2024-07-12 11:42:31.674560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.251 [2024-07-12 11:42:31.674587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.251 [2024-07-12 11:42:31.674629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.251 [2024-07-12 11:42:31.674658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.251 [2024-07-12 11:42:31.674685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.251 [2024-07-12 11:42:31.674713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.251 [2024-07-12 11:42:31.674740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.251 [2024-07-12 11:42:31.674767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.251 [2024-07-12 11:42:31.674788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.674801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.674816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.674828] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.674843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.674855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.674870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.674883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.674897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.674910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.674925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.674953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.674968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.674981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.674996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 
[2024-07-12 11:42:31.675424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.252 [2024-07-12 11:42:31.675830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 11:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.252 [2024-07-12 11:42:31.675908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.675975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.675987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.676002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.676015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.252 [2024-07-12 11:42:31.676030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.252 [2024-07-12 11:42:31.676043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.253 [2024-07-12 11:42:31.676071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.253 [2024-07-12 11:42:31.676502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.253 [2024-07-12 11:42:31.676565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.253 [2024-07-12 11:42:31.676576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65448 len:8 PRP1 0x0 PRP2 0x0 00:18:49.253 [2024-07-12 11:42:31.676600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.253 [2024-07-12 11:42:31.676664] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7496d0 was disconnected and freed. reset controller. 
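The long run of ABORTED - SQ DELETION notices above is the host draining every READ/WRITE that was still queued on qpair 0x7496d0 when its submission queue went away; the shell trace woven through the dump shows the step the multipath test is executing at that moment, removing the subsystem over RPC. A minimal sketch of that step, using only the command already visible in the trace (path and NQN exactly as logged):

    # multipath.sh step traced above: deleting the subsystem tears down the SQ, so the
    # host completes every queued request as ABORTED - SQ DELETION before the reset below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1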
00:18:49.253 [2024-07-12 11:42:31.677863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:49.253 [2024-07-12 11:42:31.677903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c3100 (9): Bad file descriptor 00:18:49.253 [2024-07-12 11:42:31.678272] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.253 [2024-07-12 11:42:31.678302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c3100 with addr=10.0.0.2, port=4421 00:18:49.253 [2024-07-12 11:42:31.678318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c3100 is same with the state(5) to be set 00:18:49.253 [2024-07-12 11:42:31.678389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c3100 (9): Bad file descriptor 00:18:49.253 [2024-07-12 11:42:31.678533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:49.253 [2024-07-12 11:42:31.678558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:49.253 [2024-07-12 11:42:31.678574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:49.253 [2024-07-12 11:42:31.678644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:49.253 [2024-07-12 11:42:31.678665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:49.253 [2024-07-12 11:42:41.746740] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:49.253 Received shutdown signal, test time was about 55.269345 seconds 00:18:49.253 00:18:49.253 Latency(us) 00:18:49.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.253 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.253 Verification LBA range: start 0x0 length 0x4000 00:18:49.253 Nvme0n1 : 55.27 7437.67 29.05 0.00 0.00 17175.65 126.60 7046430.72 00:18:49.253 =================================================================================================================== 00:18:49.253 Total : 7437.67 29.05 0.00 0.00 17175.65 126.60 7046430.72 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.253 rmmod nvme_tcp 00:18:49.253 rmmod nvme_fabrics 00:18:49.253 rmmod nvme_keyring 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@124 -- # set -e 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80934 ']' 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80934 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80934 ']' 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80934 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80934 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:49.253 killing process with pid 80934 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80934' 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80934 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80934 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.253 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.254 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.254 11:42:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:49.254 00:18:49.254 real 1m0.649s 00:18:49.254 user 2m47.232s 00:18:49.254 sys 0m18.782s 00:18:49.254 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.254 ************************************ 00:18:49.254 11:42:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:49.254 END TEST nvmf_host_multipath 00:18:49.254 ************************************ 00:18:49.514 11:42:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:49.514 11:42:52 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:49.514 11:42:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:49.514 11:42:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.514 11:42:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.514 ************************************ 00:18:49.514 START TEST nvmf_timeout 00:18:49.514 ************************************ 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:49.514 * Looking for test storage... 00:18:49.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 
00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:49.514 Cannot find device "nvmf_tgt_br" 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.514 Cannot find device "nvmf_tgt_br2" 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:49.514 Cannot find device "nvmf_tgt_br" 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:49.514 Cannot find device "nvmf_tgt_br2" 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:49.514 11:42:52 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:49.514 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.515 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.515 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.773 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.773 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.773 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.773 11:42:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:49.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:49.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:18:49.773 00:18:49.773 --- 10.0.0.2 ping statistics --- 00:18:49.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.773 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:49.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:18:49.773 00:18:49.773 --- 10.0.0.3 ping statistics --- 00:18:49.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.773 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:49.773 00:18:49.773 --- 10.0.0.1 ping statistics --- 00:18:49.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.773 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82082 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82082 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82082 ']' 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
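The successful pings above confirm the veth/namespace topology that the tcp/virt setup builds before the target starts: nvmf_init_if (10.0.0.1) stays in the root namespace for the initiator, while nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) live inside nvmf_tgt_ns_spdk, with the peer ends enslaved to the nvmf_br bridge. A condensed sketch of that bring-up, using only ip/iptables commands already traced above (interface, namespace, and address names exactly as logged; most of the link-up steps and the bridge FORWARD rule are omitted here but appear in the trace):

    # namespace for the target, three veth pairs, addresses, bridge, and the 4420 accept rule
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT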
00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.773 11:42:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:49.773 [2024-07-12 11:42:53.208998] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:18:49.773 [2024-07-12 11:42:53.209093] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.031 [2024-07-12 11:42:53.346366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:50.031 [2024-07-12 11:42:53.459653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.031 [2024-07-12 11:42:53.459714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.031 [2024-07-12 11:42:53.459725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.031 [2024-07-12 11:42:53.459776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.031 [2024-07-12 11:42:53.459790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.031 [2024-07-12 11:42:53.459938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.031 [2024-07-12 11:42:53.459950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.289 [2024-07-12 11:42:53.513334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:50.855 11:42:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.856 11:42:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:50.856 11:42:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.856 11:42:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:50.856 11:42:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:50.856 11:42:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.856 11:42:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.856 11:42:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:51.115 [2024-07-12 11:42:54.441718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.115 11:42:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:51.374 Malloc0 00:18:51.374 11:42:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.632 11:42:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.890 11:42:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.148 [2024-07-12 
11:42:55.446495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82131 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82131 /var/tmp/bdevperf.sock 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82131 ']' 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.148 11:42:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:52.148 [2024-07-12 11:42:55.512562] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:18:52.148 [2024-07-12 11:42:55.512650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82131 ] 00:18:52.406 [2024-07-12 11:42:55.650864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.406 [2024-07-12 11:42:55.765877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.406 [2024-07-12 11:42:55.825231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:53.338 11:42:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.338 11:42:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:53.338 11:42:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:53.338 11:42:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:53.905 NVMe0n1 00:18:53.905 11:42:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82159 00:18:53.905 11:42:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:53.905 11:42:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:53.905 Running I/O for 10 seconds... 
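At this point the host side is fully wired up: bdevperf was started in wait mode (-z) on its own RPC socket, the NVMe bdev retry behaviour was set, controller NVMe0 was attached over TCP with a 5-second controller-loss timeout and a 2-second reconnect delay, and the queued 10-second verify job was started through perform_tests. A consolidated sketch of those host-side commands as they appear in the trace above; the rpc shell function is just shorthand introduced here:

    # shorthand for talking to the bdevperf application's RPC socket
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

    # -z keeps bdevperf idle until perform_tests is issued over /var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &

    rpc bdev_nvme_set_options -r -1          # retry setting copied verbatim from the trace
    rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # kick off the queued verify workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests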
00:18:54.839 11:42:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.099 [2024-07-12 11:42:58.366662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 
[2024-07-12 11:42:58.366945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.366984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.366994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.367005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.367015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.099 [2024-07-12 11:42:58.367028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.099 [2024-07-12 11:42:58.367037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.367806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.367838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 
11:42:58.367866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.367998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.100 [2024-07-12 11:42:58.368522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.368542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.368562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:102 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.368596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.100 [2024-07-12 11:42:58.368618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.100 [2024-07-12 11:42:58.368629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78560 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.368879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.368890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.368899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.369830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.369851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.369872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.369893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.369913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 
[2024-07-12 11:42:58.369933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.369954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.369975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.369986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.369995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.370016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.370037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.370057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.101 [2024-07-12 11:42:58.370079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.370100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.370120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.370141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.370161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.370182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.370203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.101 [2024-07-12 11:42:58.370224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f84d0 is same with the state(5) to be set 00:18:55.101 [2024-07-12 11:42:58.370249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.101 [2024-07-12 11:42:58.370257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.101 [2024-07-12 11:42:58.370275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78648 len:8 PRP1 0x0 PRP2 0x0 00:18:55.101 [2024-07-12 11:42:58.370284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.101 [2024-07-12 11:42:58.370311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.101 [2024-07-12 11:42:58.370319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79248 len:8 PRP1 0x0 PRP2 0x0 00:18:55.101 [2024-07-12 11:42:58.370328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.101 [2024-07-12 11:42:58.370346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.101 [2024-07-12 11:42:58.370354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79256 len:8 PRP1 0x0 PRP2 0x0 00:18:55.101 [2024-07-12 11:42:58.370363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:18:55.101 [2024-07-12 11:42:58.370380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.101 [2024-07-12 11:42:58.370388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79264 len:8 PRP1 0x0 PRP2 0x0 00:18:55.101 [2024-07-12 11:42:58.370397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.101 [2024-07-12 11:42:58.370423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.101 [2024-07-12 11:42:58.370431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79272 len:8 PRP1 0x0 PRP2 0x0 00:18:55.101 [2024-07-12 11:42:58.370439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.101 [2024-07-12 11:42:58.370449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.101 [2024-07-12 11:42:58.370456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79288 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79296 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79304 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370613] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79312 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79320 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79328 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79336 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79344 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79352 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78656 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78664 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78672 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78680 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.370971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.370978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.370986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78688 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.370996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.371005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.371012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.371019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78696 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.371028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.371037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.371044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 
11:42:58.371052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78704 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 [2024-07-12 11:42:58.371061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.371070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.102 [2024-07-12 11:42:58.371082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.102 [2024-07-12 11:42:58.371090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78712 len:8 PRP1 0x0 PRP2 0x0 00:18:55.102 11:42:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:55.102 [2024-07-12 11:42:58.385364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.385517] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20f84d0 was disconnected and freed. reset controller. 00:18:55.102 [2024-07-12 11:42:58.385732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.102 [2024-07-12 11:42:58.385759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.385777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.102 [2024-07-12 11:42:58.385792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.385806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.102 [2024-07-12 11:42:58.385820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.385833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:55.102 [2024-07-12 11:42:58.385847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.102 [2024-07-12 11:42:58.385861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20add40 is same with the state(5) to be set 00:18:55.102 [2024-07-12 11:42:58.386195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:55.102 [2024-07-12 11:42:58.386225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20add40 (9): Bad file descriptor 00:18:55.102 [2024-07-12 11:42:58.386357] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.102 [2024-07-12 11:42:58.386387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20add40 with addr=10.0.0.2, port=4420 00:18:55.102 [2024-07-12 11:42:58.386409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20add40 is same with the state(5) to be set 00:18:55.102 [2024-07-12 11:42:58.386442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x20add40 (9): Bad file descriptor 00:18:55.102 [2024-07-12 11:42:58.386464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.102 [2024-07-12 11:42:58.386478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:55.102 [2024-07-12 11:42:58.386493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:55.102 [2024-07-12 11:42:58.386520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:55.102 [2024-07-12 11:42:58.386535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.004 [2024-07-12 11:43:00.386809] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:57.004 [2024-07-12 11:43:00.386908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20add40 with addr=10.0.0.2, port=4420 00:18:57.004 [2024-07-12 11:43:00.386925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20add40 is same with the state(5) to be set 00:18:57.004 [2024-07-12 11:43:00.386952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20add40 (9): Bad file descriptor 00:18:57.004 [2024-07-12 11:43:00.386971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.004 [2024-07-12 11:43:00.386984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:57.004 [2024-07-12 11:43:00.386996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:57.004 [2024-07-12 11:43:00.387024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
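The long run of ABORTED - SQ DELETION notices above is every in-flight READ/WRITE on qpair 0x20f84d0 being completed with an abort status after the listener was removed; the qpair is then freed and bdev_nvme begins resetting the controller. With --reconnect-delay-sec 2, the connect() attempts fail with errno 111 at roughly two-second intervals (11:42:58, 11:43:00 and, below, 11:43:02), and once the 5-second --ctrlr-loss-timeout-sec window has passed the controller is left in the failed state and deleted. A hypothetical watch loop, not part of the harness, that makes the moment of deletion visible:

    # NVMe0 stays listed while reconnect attempts are still being made and
    # disappears once ctrlr-loss-timeout-sec has elapsed
    while /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_controllers | jq -e '.[].name' > /dev/null; do
        sleep 1
    done
    echo 'controller NVMe0 removed'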
00:18:57.004 [2024-07-12 11:43:00.387036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.004 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:57.005 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:57.005 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:57.264 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:57.264 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:57.264 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:57.264 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:57.522 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:57.522 11:43:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:59.421 [2024-07-12 11:43:02.387260] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.421 [2024-07-12 11:43:02.387334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20add40 with addr=10.0.0.2, port=4420 00:18:59.421 [2024-07-12 11:43:02.387352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20add40 is same with the state(5) to be set 00:18:59.421 [2024-07-12 11:43:02.387379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20add40 (9): Bad file descriptor 00:18:59.421 [2024-07-12 11:43:02.387398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.421 [2024-07-12 11:43:02.387408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:59.421 [2024-07-12 11:43:02.387419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:59.421 [2024-07-12 11:43:02.387447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:59.421 [2024-07-12 11:43:02.387459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:01.319 [2024-07-12 11:43:04.387607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:01.319 [2024-07-12 11:43:04.387688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:01.319 [2024-07-12 11:43:04.387701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:01.319 [2024-07-12 11:43:04.387712] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:01.319 [2024-07-12 11:43:04.387750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:02.250
00:19:02.250 Latency(us)
00:19:02.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:02.250 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:02.250 Verification LBA range: start 0x0 length 0x4000
00:19:02.250 NVMe0n1 : 8.19 1196.11 4.67 15.64 0.00 105668.99 3678.95 7046430.72
00:19:02.250 ===================================================================================================================
00:19:02.250 Total : 1196.11 4.67 15.64 0.00 105668.99 3678.95 7046430.72
00:19:02.250 0
00:19:02.506 11:43:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:19:02.506 11:43:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
11:43:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:02.763 11:43:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:02.763 11:43:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:19:02.763 11:43:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82159
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82131
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82131 ']'
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82131
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:03.025 11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82131
00:19:03.283 11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:03.283 11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:03.283 killing process with pid 82131
11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82131'
11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82131
11:43:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82131
00:19:03.283 Received shutdown signal, test time was about 9.278788 seconds
00:19:03.283
00:19:03.283 Latency(us)
00:19:03.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:03.283 ===================================================================================================================
00:19:03.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:03.283 11:43:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-07-12 11:43:06.991581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82281
00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- #
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82281 /var/tmp/bdevperf.sock 00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82281 ']' 00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.849 11:43:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:03.849 [2024-07-12 11:43:07.066227] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:19:03.849 [2024-07-12 11:43:07.066326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82281 ] 00:19:03.849 [2024-07-12 11:43:07.199975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.106 [2024-07-12 11:43:07.317602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.107 [2024-07-12 11:43:07.372339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:04.673 11:43:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.673 11:43:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:04.673 11:43:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:04.931 11:43:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:05.188 NVMe0n1 00:19:05.188 11:43:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.188 11:43:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82300 00:19:05.188 11:43:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:05.471 Running I/O for 10 seconds... 
00:19:06.403 11:43:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:06.664 [2024-07-12 11:43:09.867808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.867855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.867884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.867896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.867908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.867918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.867930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.867940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.867951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.867961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.867972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.867981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.867992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 
[2024-07-12 11:43:09.868063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.664 [2024-07-12 11:43:09.868198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.664 [2024-07-12 11:43:09.868531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.664 [2024-07-12 11:43:09.868540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.868889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.868910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.868931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 
11:43:09.868947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.868957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.868978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.868990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.868999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.869250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.869271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.869292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.869433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.869455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.869476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.869774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.665 [2024-07-12 11:43:09.869808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.665 [2024-07-12 11:43:09.869852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.665 [2024-07-12 11:43:09.869863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.869873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.869884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.869895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.869907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.869917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.869929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.869938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.869950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.869960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.869972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.869981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.869993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84832 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 
[2024-07-12 11:43:09.870220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.666 [2024-07-12 11:43:09.870331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.666 [2024-07-12 11:43:09.870662] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23954d0 is same with the state(5) to be set 00:19:06.666 [2024-07-12 11:43:09.870687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.666 [2024-07-12 11:43:09.870695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.666 [2024-07-12 11:43:09.870704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84504 len:8 PRP1 0x0 PRP2 0x0 00:19:06.666 [2024-07-12 11:43:09.870714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.666 [2024-07-12 11:43:09.870733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.666 [2024-07-12 11:43:09.870741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84960 len:8 PRP1 0x0 PRP2 0x0 00:19:06.666 [2024-07-12 11:43:09.870751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.666 [2024-07-12 11:43:09.870768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.666 [2024-07-12 11:43:09.870785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84968 len:8 PRP1 0x0 PRP2 0x0 00:19:06.666 [2024-07-12 11:43:09.870794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.666 [2024-07-12 11:43:09.870804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.666 [2024-07-12 11:43:09.870814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.870822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84976 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.870831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.870840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.870847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.870855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84984 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.870864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.870873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.870880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.870888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84992 
len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.870906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.870914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.870930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85000 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.870939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.870949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.870957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.870965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85008 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.870975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.870984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.870992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85016 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.871010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.871028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85024 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.871044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.871061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85032 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.871085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.871102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85040 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 
11:43:09.871118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.871135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85048 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.871157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.871174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85056 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.871190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.871214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85064 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.871236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.871253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85072 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.871270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:06.667 [2024-07-12 11:43:09.871288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:06.667 [2024-07-12 11:43:09.871296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85080 len:8 PRP1 0x0 PRP2 0x0 00:19:06.667 [2024-07-12 11:43:09.871305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.667 [2024-07-12 11:43:09.871358] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23954d0 was disconnected and freed. reset controller. 
00:19:06.667 [2024-07-12 11:43:09.872764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:06.667 [2024-07-12 11:43:09.872861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234ad40 (9): Bad file descriptor 00:19:06.667 [2024-07-12 11:43:09.872973] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.667 [2024-07-12 11:43:09.872993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234ad40 with addr=10.0.0.2, port=4420 00:19:06.667 [2024-07-12 11:43:09.873004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234ad40 is same with the state(5) to be set 00:19:06.667 [2024-07-12 11:43:09.873022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234ad40 (9): Bad file descriptor 00:19:06.667 [2024-07-12 11:43:09.873038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:06.667 [2024-07-12 11:43:09.873047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:06.667 [2024-07-12 11:43:09.873058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:06.667 [2024-07-12 11:43:09.873079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:06.667 [2024-07-12 11:43:09.873090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:06.667 11:43:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:07.602 [2024-07-12 11:43:10.873242] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.602 [2024-07-12 11:43:10.873313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234ad40 with addr=10.0.0.2, port=4420 00:19:07.602 [2024-07-12 11:43:10.873330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234ad40 is same with the state(5) to be set 00:19:07.602 [2024-07-12 11:43:10.873357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234ad40 (9): Bad file descriptor 00:19:07.602 [2024-07-12 11:43:10.873376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:07.602 [2024-07-12 11:43:10.873386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:07.602 [2024-07-12 11:43:10.873396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:07.602 [2024-07-12 11:43:10.873424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
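Note: the connect() failures above (errno = 111) are the host-side reconnect loop hitting 10.0.0.2:4420 while the subsystem's listener is removed; host/timeout.sh re-adds the listener just below and the next reset then succeeds. A minimal sketch of the listener toggle being exercised, using the same rpc.py invocations that appear in this trace (timeout.sh@91 adds it back here, timeout.sh@99 removes it again for the second pass):

    # toggle the TCP listener that the host keeps trying to reconnect to
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1   # host reconnect attempts fail with errno 111 during this window
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420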
00:19:07.602 [2024-07-12 11:43:10.873437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.602 11:43:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.860 [2024-07-12 11:43:11.120894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.860 11:43:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82300 00:19:08.793 [2024-07-12 11:43:11.884579] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:15.345 00:19:15.345 Latency(us) 00:19:15.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.345 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:15.345 Verification LBA range: start 0x0 length 0x4000 00:19:15.345 NVMe0n1 : 10.01 6542.15 25.56 0.00 0.00 19522.52 1549.03 3019898.88 00:19:15.345 =================================================================================================================== 00:19:15.345 Total : 6542.15 25.56 0.00 0.00 19522.52 1549.03 3019898.88 00:19:15.345 0 00:19:15.345 11:43:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82405 00:19:15.345 11:43:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:15.345 11:43:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:15.604 Running I/O for 10 seconds... 00:19:16.551 11:43:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.824 [2024-07-12 11:43:19.994557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.994870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.994879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.995197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.995223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.995236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.995246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.995259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.995268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 
[2024-07-12 11:43:19.995279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.995288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.995299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.824 [2024-07-12 11:43:19.995308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.824 [2024-07-12 11:43:19.995320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.995854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.995863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.996856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.996986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.997130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.997247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.997268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63616 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.997288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.825 [2024-07-12 11:43:19.997609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.997633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.997664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.997685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.997706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.997717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.997726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.998053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.998066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.998077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.998087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.998099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.998108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.998119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 
[2024-07-12 11:43:19.998128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.998139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.998148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.998290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.998302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.998533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.998558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.825 [2024-07-12 11:43:19.998591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.825 [2024-07-12 11:43:19.998603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.998615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.998624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.998636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.998645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.998657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.826 [2024-07-12 11:43:19.998666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.998678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.826 [2024-07-12 11:43:19.998687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.998698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.998708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999121] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.826 [2024-07-12 11:43:19.999476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:19.999844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:19.999985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.000113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.000129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.000256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.000272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.000282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.000393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.000406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.000418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.000427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.000558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.000571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.000701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.000846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.000997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.001105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.001121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.001143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.001424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.001664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.001686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.001699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.001710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.001722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.001731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.001742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.001751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.001763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.001771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.826 [2024-07-12 11:43:20.002744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.826 [2024-07-12 11:43:20.002753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.002881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.002978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.002993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.003003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.003015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.003024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 
[2024-07-12 11:43:20.003187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.003550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.003588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.003601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.003614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.003624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.003636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.003646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.003657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.003772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.003792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.004064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.004190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.004209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.004223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.004232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.004511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.004656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.004786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.004805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.004819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.004953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.005185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.005203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.005216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.005225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.005237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.005246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.005528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.005542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.005554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.005564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.005810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.005838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.005853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.005863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.005875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.005884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.006017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.006257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.006282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.827 [2024-07-12 11:43:20.006297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.006309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2393ee0 is same with the state(5) to be set 00:19:16.827 [2024-07-12 11:43:20.006326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.827 [2024-07-12 11:43:20.006604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.827 [2024-07-12 11:43:20.006623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63248 len:8 PRP1 0x0 PRP2 0x0 00:19:16.827 [2024-07-12 11:43:20.006633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.006892] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2393ee0 was disconnected and freed. reset controller. 00:19:16.827 [2024-07-12 11:43:20.007138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.827 [2024-07-12 11:43:20.007163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.007175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.827 [2024-07-12 11:43:20.007184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.007194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.827 [2024-07-12 11:43:20.007203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.007327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.827 [2024-07-12 11:43:20.007345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.827 [2024-07-12 11:43:20.007483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234ad40 is same with the state(5) to be set 00:19:16.827 [2024-07-12 11:43:20.007987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:16.827 [2024-07-12 11:43:20.008040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234ad40 (9): Bad file descriptor 00:19:16.827 [2024-07-12 11:43:20.008341] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.827 [2024-07-12 11:43:20.008374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234ad40 with addr=10.0.0.2, port=4420 00:19:16.827 [2024-07-12 11:43:20.008387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234ad40 is same with the state(5) to be set 00:19:16.827 [2024-07-12 11:43:20.008408] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234ad40 (9): Bad file descriptor 00:19:16.827 [2024-07-12 11:43:20.008708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:16.827 [2024-07-12 11:43:20.008738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:16.827 [2024-07-12 11:43:20.008750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:16.827 [2024-07-12 11:43:20.008773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:16.827 [2024-07-12 11:43:20.008784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:16.827 11:43:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:17.762 [2024-07-12 11:43:21.009193] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.762 [2024-07-12 11:43:21.009271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234ad40 with addr=10.0.0.2, port=4420 00:19:17.762 [2024-07-12 11:43:21.009287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234ad40 is same with the state(5) to be set 00:19:17.762 [2024-07-12 11:43:21.009316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234ad40 (9): Bad file descriptor 00:19:17.762 [2024-07-12 11:43:21.009335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.762 [2024-07-12 11:43:21.009345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:17.762 [2024-07-12 11:43:21.009356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:17.762 [2024-07-12 11:43:21.009394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:17.762 [2024-07-12 11:43:21.009406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:18.697 [2024-07-12 11:43:22.009564] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.697 [2024-07-12 11:43:22.009680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234ad40 with addr=10.0.0.2, port=4420 00:19:18.697 [2024-07-12 11:43:22.009697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234ad40 is same with the state(5) to be set 00:19:18.697 [2024-07-12 11:43:22.009725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234ad40 (9): Bad file descriptor 00:19:18.697 [2024-07-12 11:43:22.009744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:18.697 [2024-07-12 11:43:22.009755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:18.697 [2024-07-12 11:43:22.009765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:18.697 [2024-07-12 11:43:22.009793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
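Note: errno 111 in the uring_sock_create errors is ECONNREFUSED on Linux, and the timestamps show the reconnect attempts landing roughly once per second (11:43:20, :21, :22) until the listener is restored. A quick way to confirm the errno mapping, assuming python3 is available on the test VM:

    # map the errno reported by connect() to its symbolic name
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # expected: ECONNREFUSED - Connection refused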
00:19:18.698 [2024-07-12 11:43:22.009806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:19.633 [2024-07-12 11:43:23.012721] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:19.633 [2024-07-12 11:43:23.012782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234ad40 with addr=10.0.0.2, port=4420 00:19:19.633 [2024-07-12 11:43:23.012798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234ad40 is same with the state(5) to be set 00:19:19.633 [2024-07-12 11:43:23.013261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234ad40 (9): Bad file descriptor 00:19:19.633 [2024-07-12 11:43:23.013694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:19.633 [2024-07-12 11:43:23.013720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:19.633 [2024-07-12 11:43:23.013732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:19.633 11:43:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.633 [2024-07-12 11:43:23.018055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:19.633 [2024-07-12 11:43:23.018090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:19.890 [2024-07-12 11:43:23.280203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.890 11:43:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82405 00:19:20.825 [2024-07-12 11:43:24.057085] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:26.095
00:19:26.095 Latency(us)
00:19:26.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:26.095 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:26.095 Verification LBA range: start 0x0 length 0x4000
00:19:26.095 NVMe0n1 : 10.01 5278.78 20.62 3663.07 0.00 14281.30 670.25 3035150.89
00:19:26.095 ===================================================================================================================
00:19:26.095 Total : 5278.78 20.62 3663.07 0.00 14281.30 0.00 3035150.89
00:19:26.095 0
00:19:26.095 11:43:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82281
00:19:26.095 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82281 ']'
00:19:26.095 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82281
00:19:26.095 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:19:26.095 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:26.095 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82281
00:19:26.095 killing process with pid 82281
Received shutdown signal, test time was about 10.000000 seconds
00:19:26.095
00:19:26.095 Latency(us)
00:19:26.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:26.096 ===================================================================================================================
00:19:26.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:26.096 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:26.096 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:26.096 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82281'
00:19:26.096 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82281
00:19:26.096 11:43:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82281
00:19:26.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82518
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82518 /var/tmp/bdevperf.sock
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82518 ']'
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:26.096 11:43:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:19:26.096 [2024-07-12 11:43:29.180376] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization...
00:19:26.096 [2024-07-12 11:43:29.180463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82518 ]
00:19:26.096 [2024-07-12 11:43:29.315192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:26.096 [2024-07-12 11:43:29.429528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:26.096 [2024-07-12 11:43:29.483533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:19:27.031 11:43:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:27.031 11:43:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:19:27.031 11:43:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82530
00:19:27.031 11:43:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82518 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:19:27.031 11:43:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:19:27.031 11:43:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:19:27.289 NVMe0n1
00:19:27.289 11:43:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82577
00:19:27.289 11:43:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:27.289 11:43:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:19:27.553 Running I/O for 10 seconds...
00:19:28.501 11:43:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.762 [2024-07-12 11:43:31.958014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.762 [2024-07-12 11:43:31.958189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958263] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the 
state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 
11:43:31.958838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.763 [2024-07-12 11:43:31.958984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.958993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same 
with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8b80 is same with the state(5) to be set 00:19:28.764 [2024-07-12 11:43:31.959681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.959718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.959750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.959762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 
11:43:31.959791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.959801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.959813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.959822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.959834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.959843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.959854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.959971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.959992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960355] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.960944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.960956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.764 [2024-07-12 11:43:31.961944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.764 [2024-07-12 11:43:31.961956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.962089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.962206] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.962233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.962247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.962368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.962386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.962395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.962406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.962546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.962682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.962925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.962953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.962964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.962976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.962985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.962996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.963845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.963854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.964765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.964776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.965044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.965077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 
11:43:31.965107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.965132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.965153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.965374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.965398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.965419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.765 [2024-07-12 11:43:31.965551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.765 [2024-07-12 11:43:31.965566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.965693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.965716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.965824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.965846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.965856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.965981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966839] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.966877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.966990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.967760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.967895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.968022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.968043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.968180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.968322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.968428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.968450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.968470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.968858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.968871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:28.766 [2024-07-12 11:43:31.968882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.969024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.969045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.969197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.969330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.969594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.969617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.969727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.766 [2024-07-12 11:43:31.969758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.766 [2024-07-12 11:43:31.969774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.969907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.969996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 
11:43:31.970011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.767 [2024-07-12 11:43:31.970923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.970944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e39310 is same with the state(5) to be set 00:19:28.767 [2024-07-12 11:43:31.971162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.767 [2024-07-12 11:43:31.971181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.767 [2024-07-12 11:43:31.971191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30840 len:8 PRP1 0x0 PRP2 0x0 00:19:28.767 [2024-07-12 11:43:31.971200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.971397] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e39310 was disconnected and freed. reset controller. 
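The long run of paired READ / "ABORTED - SQ DELETION (00/08)" messages above is the expected completion storm when I/O submission queue 1 is deleted during the forced controller reset: every command still queued on that qpair is completed manually with the SQ-deletion status, after which the qpair is disconnected and freed. When triaging a log like this it is usually enough to count the aborts rather than read them; the one-liner below is a triage sketch, not part of the test scripts, and console.log is a placeholder for wherever this output was saved.

    # Sketch only: summarize how many commands were aborted per queue in a saved log.
    # console.log is a hypothetical path, not a file produced by this job.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c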
00:19:28.767 [2024-07-12 11:43:31.971647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.767 [2024-07-12 11:43:31.971670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.971682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.767 [2024-07-12 11:43:31.971692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.971702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.767 [2024-07-12 11:43:31.971711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.971846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.767 [2024-07-12 11:43:31.971913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.767 [2024-07-12 11:43:31.971923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcac00 is same with the state(5) to be set 00:19:28.767 [2024-07-12 11:43:31.972294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.767 [2024-07-12 11:43:31.972329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcac00 (9): Bad file descriptor 00:19:28.767 [2024-07-12 11:43:31.972602] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.767 [2024-07-12 11:43:31.972633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcac00 with addr=10.0.0.2, port=4420 00:19:28.767 [2024-07-12 11:43:31.972645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcac00 is same with the state(5) to be set 00:19:28.767 [2024-07-12 11:43:31.972666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcac00 (9): Bad file descriptor 00:19:28.767 [2024-07-12 11:43:31.972682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.767 [2024-07-12 11:43:31.972803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:28.767 [2024-07-12 11:43:31.972816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.767 [2024-07-12 11:43:31.972946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
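After the admin queue is drained the driver tries to reconnect, and the attempt fails at the socket layer: uring_sock_create() reports connect() errno 111, which on Linux is ECONNREFUSED, so nothing is accepting TCP connections on 10.0.0.2:4420 at this point in the timeout test (the earlier "(9): Bad file descriptor" is simply EBADF from flushing a qpair whose socket is already gone). The controller is therefore left in the failed state until the next scheduled retry. If a raw errno in a log like this ever needs confirming, the kernel headers are the quickest reference; a small sketch, assuming the usual header location (path may vary by distribution):

    # Sketch: confirm what errno 111 means on the build host.
    grep -nw 111 /usr/include/asm-generic/errno.h    # expect: #define ECONNREFUSED 111 /* Connection refused */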
00:19:28.767 [2024-07-12 11:43:31.973035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.767 11:43:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82577 00:19:30.687 [2024-07-12 11:43:33.973233] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:30.687 [2024-07-12 11:43:33.973304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcac00 with addr=10.0.0.2, port=4420 00:19:30.687 [2024-07-12 11:43:33.973321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcac00 is same with the state(5) to be set 00:19:30.687 [2024-07-12 11:43:33.973349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcac00 (9): Bad file descriptor 00:19:30.687 [2024-07-12 11:43:33.973368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.687 [2024-07-12 11:43:33.973377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:30.687 [2024-07-12 11:43:33.973389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.687 [2024-07-12 11:43:33.973417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.687 [2024-07-12 11:43:33.973429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:32.619 [2024-07-12 11:43:35.973673] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:32.619 [2024-07-12 11:43:35.973741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcac00 with addr=10.0.0.2, port=4420 00:19:32.619 [2024-07-12 11:43:35.973758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcac00 is same with the state(5) to be set 00:19:32.619 [2024-07-12 11:43:35.973799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcac00 (9): Bad file descriptor 00:19:32.619 [2024-07-12 11:43:35.973820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:32.619 [2024-07-12 11:43:35.973831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:32.619 [2024-07-12 11:43:35.973841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:32.619 [2024-07-12 11:43:35.973869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:32.619 [2024-07-12 11:43:35.973881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.149 [2024-07-12 11:43:37.974001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
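Note the cadence of the failures above: the reconnect attempts land at roughly 11:43:31.97, 11:43:33.97, 11:43:35.97 and 11:43:37.97, one every 2 seconds, matching the ~2000 ms spacing of the 'reconnect delay bdev controller NVMe0' probe hits in the trace dump that follows. That pacing comes from the reconnect options passed when the NVMe bdev controller is attached. The command below is only a hedged sketch of such an attach, not copied from this job, and the option names should be verified against "rpc.py bdev_nvme_attach_controller -h" for the SPDK revision in use:

    # Hypothetical attach with a 2 s reconnect delay and a short controller-loss timeout.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2

With settings along these lines the bdev layer retries every reconnect-delay-sec seconds until ctrlr-loss-timeout-sec expires; the trace check just below counts 3 such delays, so the (( 3 <= 2 )) guard is false and the test proceeds to clean up.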
00:19:35.149 [2024-07-12 11:43:37.974059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:35.149 [2024-07-12 11:43:37.974072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:35.149 [2024-07-12 11:43:37.974083] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:19:35.150 [2024-07-12 11:43:37.974111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:35.716
00:19:35.716 Latency(us)
00:19:35.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.716 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:19:35.716 NVMe0n1 : 8.18 2095.44 8.19 15.64 0.00 60679.28 8162.21 7046430.72
00:19:35.716 ===================================================================================================================
00:19:35.716 Total : 2095.44 8.19 15.64 0.00 60679.28 8162.21 7046430.72
00:19:35.716 0
00:19:35.716 11:43:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:19:35.716 Attaching 5 probes...
00:19:35.716 1255.707502: reset bdev controller NVMe0
00:19:35.716 1255.786964: reconnect bdev controller NVMe0
00:19:35.716 3256.504403: reconnect delay bdev controller NVMe0
00:19:35.716 3256.529379: reconnect bdev controller NVMe0
00:19:35.716 5256.906085: reconnect delay bdev controller NVMe0
00:19:35.716 5256.929323: reconnect bdev controller NVMe0
00:19:35.716 7257.361940: reconnect delay bdev controller NVMe0
00:19:35.716 7257.399255: reconnect bdev controller NVMe0
00:19:35.716 11:43:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82530
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82518
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82518 ']'
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82518
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82518
00:19:35.716 killing process with pid 82518
Received shutdown signal, test time was about 8.234908 seconds
00:19:35.716
00:19:35.716 Latency(us)
00:19:35.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:35.716 ===================================================================================================================
00:19:35.716 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82518'
00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 82518 00:19:35.716 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82518 00:19:35.974 11:43:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.232 rmmod nvme_tcp 00:19:36.232 rmmod nvme_fabrics 00:19:36.232 rmmod nvme_keyring 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82082 ']' 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82082 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82082 ']' 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82082 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82082 00:19:36.232 killing process with pid 82082 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82082' 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82082 00:19:36.232 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82082 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:36.490 00:19:36.490 real 0m47.179s 00:19:36.490 user 2m18.683s 
00:19:36.490 sys 0m5.573s 00:19:36.490 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:36.490 ************************************ 00:19:36.490 END TEST nvmf_timeout 00:19:36.490 ************************************ 00:19:36.491 11:43:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 11:43:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:36.491 11:43:39 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:19:36.491 11:43:39 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:19:36.491 11:43:39 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.491 11:43:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 11:43:39 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:19:36.749 ************************************ 00:19:36.749 END TEST nvmf_tcp 00:19:36.749 ************************************ 00:19:36.749 00:19:36.749 real 12m15.532s 00:19:36.749 user 29m53.508s 00:19:36.749 sys 3m1.012s 00:19:36.749 11:43:39 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:36.749 11:43:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 11:43:39 -- common/autotest_common.sh@1142 -- # return 0 00:19:36.749 11:43:39 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:19:36.750 11:43:39 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:36.750 11:43:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:36.750 11:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:36.750 11:43:39 -- common/autotest_common.sh@10 -- # set +x 00:19:36.750 ************************************ 00:19:36.750 START TEST nvmf_dif 00:19:36.750 ************************************ 00:19:36.750 11:43:39 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:36.750 * Looking for test storage... 
00:19:36.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:36.750 11:43:40 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36.750 11:43:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.750 11:43:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.750 11:43:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.750 11:43:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.750 11:43:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.750 11:43:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.750 11:43:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:36.750 11:43:40 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:36.750 11:43:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:36.750 11:43:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:36.750 11:43:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:36.750 11:43:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:36.750 11:43:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.750 11:43:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:36.750 11:43:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:36.750 11:43:40 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:36.750 Cannot find device "nvmf_tgt_br" 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:36.750 Cannot find device "nvmf_tgt_br2" 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:36.750 Cannot find device "nvmf_tgt_br" 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:36.750 Cannot find device "nvmf_tgt_br2" 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:36.750 11:43:40 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:37.008 11:43:40 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:37.008 11:43:40 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.008 11:43:40 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:37.008 11:43:40 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.008 11:43:40 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:37.008 11:43:40 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.008 11:43:40 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.008 11:43:40 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.009 
11:43:40 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:37.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:19:37.009 00:19:37.009 --- 10.0.0.2 ping statistics --- 00:19:37.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.009 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:37.009 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.009 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:19:37.009 00:19:37.009 --- 10.0.0.3 ping statistics --- 00:19:37.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.009 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:37.009 11:43:40 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:37.009 00:19:37.009 --- 10.0.0.1 ping statistics --- 00:19:37.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.009 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:37.266 11:43:40 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.266 11:43:40 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:37.266 11:43:40 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:37.266 11:43:40 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:37.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:37.524 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.524 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:37.524 11:43:40 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:37.524 11:43:40 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:37.524 11:43:40 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:37.524 11:43:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83007 00:19:37.524 
11:43:40 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:37.524 11:43:40 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83007 00:19:37.524 11:43:40 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83007 ']' 00:19:37.524 11:43:40 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.524 11:43:40 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.524 11:43:40 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.524 11:43:40 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.524 11:43:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:37.524 [2024-07-12 11:43:40.920447] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:19:37.524 [2024-07-12 11:43:40.920548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.782 [2024-07-12 11:43:41.058731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.782 [2024-07-12 11:43:41.169957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.782 [2024-07-12 11:43:41.170039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:37.782 [2024-07-12 11:43:41.170066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.782 [2024-07-12 11:43:41.170074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.782 [2024-07-12 11:43:41.170081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
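At this point nvmfappstart has launched nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace with tracepoint group mask 0xFFFF, and waitforlisten polls until the target answers on its RPC socket /var/tmp/spdk.sock before the dif tests start issuing rpc_cmd calls. Outside the autotest harness the same start-and-wait step can be reproduced with a few lines of shell; this is a simplified sketch that assumes the default RPC socket path, not a copy of the harness helpers:

    # Sketch: start the target in the test namespace and wait for its RPC socket to answer.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling /var/tmp/spdk.sock until the app is up
    done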
00:19:37.782 [2024-07-12 11:43:41.170110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.782 [2024-07-12 11:43:41.226176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:19:38.715 11:43:41 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.715 11:43:41 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.715 11:43:41 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:38.715 11:43:41 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.715 [2024-07-12 11:43:41.937552] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.715 11:43:41 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:38.715 11:43:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:38.715 ************************************ 00:19:38.715 START TEST fio_dif_1_default 00:19:38.715 ************************************ 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:38.715 bdev_null0 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:38.715 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.716 11:43:41 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:38.716 [2024-07-12 11:43:41.981663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:38.716 { 00:19:38.716 "params": { 00:19:38.716 "name": "Nvme$subsystem", 00:19:38.716 "trtype": "$TEST_TRANSPORT", 00:19:38.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:38.716 "adrfam": "ipv4", 00:19:38.716 "trsvcid": "$NVMF_PORT", 00:19:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:38.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:38.716 "hdgst": ${hdgst:-false}, 00:19:38.716 "ddgst": ${ddgst:-false} 00:19:38.716 }, 00:19:38.716 "method": "bdev_nvme_attach_controller" 00:19:38.716 } 00:19:38.716 EOF 00:19:38.716 )") 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@556 -- # jq . 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:38.716 "params": { 00:19:38.716 "name": "Nvme0", 00:19:38.716 "trtype": "tcp", 00:19:38.716 "traddr": "10.0.0.2", 00:19:38.716 "adrfam": "ipv4", 00:19:38.716 "trsvcid": "4420", 00:19:38.716 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:38.716 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:38.716 "hdgst": false, 00:19:38.716 "ddgst": false 00:19:38.716 }, 00:19:38.716 "method": "bdev_nvme_attach_controller" 00:19:38.716 }' 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:38.716 11:43:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:38.716 11:43:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:38.974 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:38.974 fio-3.35 00:19:38.974 Starting 1 thread 00:19:51.194 00:19:51.195 filename0: (groupid=0, jobs=1): err= 0: pid=83074: Fri Jul 12 11:43:52 2024 00:19:51.195 read: IOPS=8665, BW=33.9MiB/s (35.5MB/s)(339MiB/10001msec) 00:19:51.195 slat (nsec): min=5216, max=63785, avg=8630.80, stdev=2826.79 00:19:51.195 clat (usec): min=367, max=4474, avg=436.38, stdev=37.47 00:19:51.195 lat (usec): min=374, max=4529, avg=445.01, stdev=38.01 00:19:51.195 clat percentiles (usec): 00:19:51.195 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 416], 00:19:51.195 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 437], 60.00th=[ 441], 00:19:51.195 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 465], 95.00th=[ 474], 00:19:51.195 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 570], 99.95th=[ 619], 00:19:51.195 | 99.99th=[ 1254] 00:19:51.195 bw ( KiB/s): min=32576, max=35168, per=100.00%, avg=34691.37, stdev=562.70, samples=19 00:19:51.195 iops : min= 8144, max= 8792, avg=8672.84, stdev=140.68, samples=19 00:19:51.195 lat (usec) : 500=99.13%, 750=0.84%, 1000=0.01% 
00:19:51.195 lat (msec) : 2=0.01%, 10=0.01% 00:19:51.195 cpu : usr=85.76%, sys=12.45%, ctx=16, majf=0, minf=0 00:19:51.195 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.195 issued rwts: total=86668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.195 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:51.195 00:19:51.195 Run status group 0 (all jobs): 00:19:51.195 READ: bw=33.9MiB/s (35.5MB/s), 33.9MiB/s-33.9MiB/s (35.5MB/s-35.5MB/s), io=339MiB (355MB), run=10001-10001msec 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 00:19:51.195 real 0m11.005s 00:19:51.195 user 0m9.200s 00:19:51.195 sys 0m1.519s 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.195 11:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 ************************************ 00:19:51.195 END TEST fio_dif_1_default 00:19:51.195 ************************************ 00:19:51.195 11:43:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:51.195 11:43:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:51.195 11:43:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:51.195 11:43:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.195 11:43:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 ************************************ 00:19:51.195 START TEST fio_dif_1_multi_subsystems 00:19:51.195 ************************************ 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.195 11:43:53 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 bdev_null0 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 [2024-07-12 11:43:53.036522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 bdev_null1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.195 { 00:19:51.195 "params": { 00:19:51.195 "name": "Nvme$subsystem", 00:19:51.195 "trtype": "$TEST_TRANSPORT", 00:19:51.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.195 "adrfam": "ipv4", 00:19:51.195 "trsvcid": "$NVMF_PORT", 00:19:51.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.195 "hdgst": ${hdgst:-false}, 00:19:51.195 "ddgst": ${ddgst:-false} 00:19:51.195 }, 00:19:51.195 "method": "bdev_nvme_attach_controller" 00:19:51.195 } 00:19:51.195 EOF 00:19:51.195 )") 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.195 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:51.196 { 00:19:51.196 "params": { 00:19:51.196 "name": "Nvme$subsystem", 00:19:51.196 "trtype": "$TEST_TRANSPORT", 00:19:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.196 "adrfam": "ipv4", 00:19:51.196 "trsvcid": "$NVMF_PORT", 00:19:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.196 "hdgst": ${hdgst:-false}, 00:19:51.196 "ddgst": ${ddgst:-false} 00:19:51.196 }, 00:19:51.196 "method": "bdev_nvme_attach_controller" 00:19:51.196 } 00:19:51.196 EOF 00:19:51.196 )") 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:51.196 "params": { 00:19:51.196 "name": "Nvme0", 00:19:51.196 "trtype": "tcp", 00:19:51.196 "traddr": "10.0.0.2", 00:19:51.196 "adrfam": "ipv4", 00:19:51.196 "trsvcid": "4420", 00:19:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:51.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:51.196 "hdgst": false, 00:19:51.196 "ddgst": false 00:19:51.196 }, 00:19:51.196 "method": "bdev_nvme_attach_controller" 00:19:51.196 },{ 00:19:51.196 "params": { 00:19:51.196 "name": "Nvme1", 00:19:51.196 "trtype": "tcp", 00:19:51.196 "traddr": "10.0.0.2", 00:19:51.196 "adrfam": "ipv4", 00:19:51.196 "trsvcid": "4420", 00:19:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.196 "hdgst": false, 00:19:51.196 "ddgst": false 00:19:51.196 }, 00:19:51.196 "method": "bdev_nvme_attach_controller" 00:19:51.196 }' 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:51.196 11:43:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.196 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:51.196 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:51.196 fio-3.35 00:19:51.196 Starting 2 threads 00:20:01.168 00:20:01.168 filename0: (groupid=0, jobs=1): err= 0: pid=83236: Fri Jul 12 11:44:03 2024 00:20:01.168 read: IOPS=4547, BW=17.8MiB/s (18.6MB/s)(178MiB/10001msec) 00:20:01.168 slat (nsec): min=6779, max=75514, avg=17969.72, stdev=7541.56 00:20:01.168 clat (usec): min=451, max=1446, avg=831.78, stdev=59.13 00:20:01.168 lat (usec): min=458, max=1468, avg=849.75, stdev=63.50 00:20:01.168 clat percentiles (usec): 00:20:01.168 | 1.00th=[ 701], 5.00th=[ 734], 10.00th=[ 766], 20.00th=[ 783], 00:20:01.168 | 30.00th=[ 799], 40.00th=[ 816], 50.00th=[ 824], 60.00th=[ 840], 00:20:01.168 | 70.00th=[ 865], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 930], 00:20:01.168 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1020], 99.95th=[ 1237], 00:20:01.168 | 99.99th=[ 1418] 00:20:01.168 bw ( KiB/s): min=16800, max=19424, per=50.17%, avg=18251.79, stdev=1029.66, samples=19 00:20:01.168 iops : min= 4200, max= 
4856, avg=4562.95, stdev=257.42, samples=19 00:20:01.168 lat (usec) : 500=0.01%, 750=7.18%, 1000=92.62% 00:20:01.168 lat (msec) : 2=0.19% 00:20:01.168 cpu : usr=91.63%, sys=6.94%, ctx=21, majf=0, minf=0 00:20:01.168 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.168 issued rwts: total=45476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.168 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:01.168 filename1: (groupid=0, jobs=1): err= 0: pid=83237: Fri Jul 12 11:44:03 2024 00:20:01.168 read: IOPS=4546, BW=17.8MiB/s (18.6MB/s)(178MiB/10001msec) 00:20:01.168 slat (usec): min=7, max=113, avg=18.02, stdev= 7.69 00:20:01.168 clat (usec): min=694, max=1453, avg=831.05, stdev=51.01 00:20:01.168 lat (usec): min=701, max=1478, avg=849.07, stdev=55.50 00:20:01.168 clat percentiles (usec): 00:20:01.168 | 1.00th=[ 750], 5.00th=[ 766], 10.00th=[ 775], 20.00th=[ 791], 00:20:01.168 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 840], 00:20:01.168 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 922], 00:20:01.168 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 1012], 99.95th=[ 1287], 00:20:01.168 | 99.99th=[ 1401] 00:20:01.168 bw ( KiB/s): min=16800, max=19424, per=50.17%, avg=18250.11, stdev=1027.94, samples=19 00:20:01.168 iops : min= 4200, max= 4856, avg=4562.53, stdev=256.98, samples=19 00:20:01.168 lat (usec) : 750=1.36%, 1000=98.50% 00:20:01.168 lat (msec) : 2=0.13% 00:20:01.168 cpu : usr=91.55%, sys=7.08%, ctx=55, majf=0, minf=0 00:20:01.168 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.168 issued rwts: total=45472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.168 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:01.168 00:20:01.168 Run status group 0 (all jobs): 00:20:01.168 READ: bw=35.5MiB/s (37.2MB/s), 17.8MiB/s-17.8MiB/s (18.6MB/s-18.6MB/s), io=355MiB (373MB), run=10001-10001msec 00:20:01.168 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:01.168 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:01.168 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:01.168 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:01.168 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:01.168 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:01.168 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.168 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- 
# set +x 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 00:20:01.169 real 0m11.130s 00:20:01.169 user 0m19.048s 00:20:01.169 sys 0m1.700s 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 ************************************ 00:20:01.169 END TEST fio_dif_1_multi_subsystems 00:20:01.169 ************************************ 00:20:01.169 11:44:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:01.169 11:44:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:01.169 11:44:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:01.169 11:44:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 ************************************ 00:20:01.169 START TEST fio_dif_rand_params 00:20:01.169 ************************************ 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 bdev_null0 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.169 [2024-07-12 11:44:04.216556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.169 { 00:20:01.169 "params": { 00:20:01.169 "name": "Nvme$subsystem", 00:20:01.169 "trtype": "$TEST_TRANSPORT", 00:20:01.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.169 "adrfam": "ipv4", 00:20:01.169 "trsvcid": "$NVMF_PORT", 00:20:01.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.169 "hdgst": 
${hdgst:-false}, 00:20:01.169 "ddgst": ${ddgst:-false} 00:20:01.169 }, 00:20:01.169 "method": "bdev_nvme_attach_controller" 00:20:01.169 } 00:20:01.169 EOF 00:20:01.169 )") 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:01.169 "params": { 00:20:01.169 "name": "Nvme0", 00:20:01.169 "trtype": "tcp", 00:20:01.169 "traddr": "10.0.0.2", 00:20:01.169 "adrfam": "ipv4", 00:20:01.169 "trsvcid": "4420", 00:20:01.169 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.169 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.169 "hdgst": false, 00:20:01.169 "ddgst": false 00:20:01.169 }, 00:20:01.169 "method": "bdev_nvme_attach_controller" 00:20:01.169 }' 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.169 11:44:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.169 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:01.169 ... 
00:20:01.169 fio-3.35 00:20:01.169 Starting 3 threads 00:20:07.732 00:20:07.732 filename0: (groupid=0, jobs=1): err= 0: pid=83388: Fri Jul 12 11:44:09 2024 00:20:07.732 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(155MiB/5003msec) 00:20:07.732 slat (nsec): min=7993, max=48360, avg=15721.45, stdev=5471.98 00:20:07.732 clat (usec): min=9667, max=22195, avg=12076.65, stdev=1198.43 00:20:07.732 lat (usec): min=9680, max=22215, avg=12092.37, stdev=1199.28 00:20:07.732 clat percentiles (usec): 00:20:07.732 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:07.732 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:20:07.732 | 70.00th=[11731], 80.00th=[11863], 90.00th=[14484], 95.00th=[14877], 00:20:07.732 | 99.00th=[15139], 99.50th=[16057], 99.90th=[22152], 99.95th=[22152], 00:20:07.732 | 99.99th=[22152] 00:20:07.732 bw ( KiB/s): min=26880, max=33024, per=33.29%, avg=31651.44, stdev=1947.27, samples=9 00:20:07.732 iops : min= 210, max= 258, avg=247.22, stdev=15.20, samples=9 00:20:07.732 lat (msec) : 10=0.24%, 20=99.52%, 50=0.24% 00:20:07.732 cpu : usr=92.62%, sys=6.82%, ctx=7, majf=0, minf=9 00:20:07.732 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.732 issued rwts: total=1239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.732 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:07.732 filename0: (groupid=0, jobs=1): err= 0: pid=83389: Fri Jul 12 11:44:09 2024 00:20:07.732 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(155MiB/5004msec) 00:20:07.732 slat (nsec): min=7717, max=53683, avg=12328.05, stdev=6968.91 00:20:07.732 clat (usec): min=11427, max=21835, avg=12082.81, stdev=1202.21 00:20:07.732 lat (usec): min=11435, max=21858, avg=12095.14, stdev=1202.91 00:20:07.732 clat percentiles (usec): 00:20:07.732 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:07.732 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:20:07.732 | 70.00th=[11863], 80.00th=[11863], 90.00th=[14484], 95.00th=[14877], 00:20:07.733 | 99.00th=[15139], 99.50th=[15401], 99.90th=[21890], 99.95th=[21890], 00:20:07.733 | 99.99th=[21890] 00:20:07.733 bw ( KiB/s): min=26880, max=33090, per=33.30%, avg=31661.00, stdev=1918.99, samples=10 00:20:07.733 iops : min= 210, max= 258, avg=247.20, stdev=14.91, samples=10 00:20:07.733 lat (msec) : 20=99.76%, 50=0.24% 00:20:07.733 cpu : usr=91.39%, sys=7.90%, ctx=11, majf=0, minf=9 00:20:07.733 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.733 issued rwts: total=1239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.733 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:07.733 filename0: (groupid=0, jobs=1): err= 0: pid=83390: Fri Jul 12 11:44:09 2024 00:20:07.733 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(155MiB/5003msec) 00:20:07.733 slat (nsec): min=7849, max=71348, avg=16334.84, stdev=5548.28 00:20:07.733 clat (usec): min=9664, max=22186, avg=12073.63, stdev=1199.37 00:20:07.733 lat (usec): min=9677, max=22205, avg=12089.97, stdev=1199.65 00:20:07.733 clat percentiles (usec): 00:20:07.733 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:07.733 | 30.00th=[11600], 
40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:20:07.733 | 70.00th=[11863], 80.00th=[11863], 90.00th=[14484], 95.00th=[15008], 00:20:07.733 | 99.00th=[15139], 99.50th=[16057], 99.90th=[22152], 99.95th=[22152], 00:20:07.733 | 99.99th=[22152] 00:20:07.733 bw ( KiB/s): min=26880, max=33024, per=33.30%, avg=31658.67, stdev=1949.64, samples=9 00:20:07.733 iops : min= 210, max= 258, avg=247.33, stdev=15.23, samples=9 00:20:07.733 lat (msec) : 10=0.24%, 20=99.52%, 50=0.24% 00:20:07.733 cpu : usr=92.28%, sys=7.12%, ctx=25, majf=0, minf=9 00:20:07.733 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:07.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:07.733 issued rwts: total=1239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:07.733 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:07.733 00:20:07.733 Run status group 0 (all jobs): 00:20:07.733 READ: bw=92.9MiB/s (97.4MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=465MiB (487MB), run=5003-5004msec 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 bdev_null0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 [2024-07-12 11:44:10.200551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 bdev_null1 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 
11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 bdev_null2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:07.733 { 00:20:07.733 "params": { 00:20:07.733 "name": "Nvme$subsystem", 00:20:07.733 "trtype": "$TEST_TRANSPORT", 00:20:07.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.733 "adrfam": "ipv4", 00:20:07.733 "trsvcid": "$NVMF_PORT", 00:20:07.733 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.733 "hdgst": ${hdgst:-false}, 00:20:07.733 "ddgst": ${ddgst:-false} 00:20:07.733 }, 00:20:07.733 "method": "bdev_nvme_attach_controller" 00:20:07.733 } 00:20:07.733 EOF 00:20:07.733 )") 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:07.733 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:07.734 { 00:20:07.734 "params": { 00:20:07.734 "name": "Nvme$subsystem", 00:20:07.734 "trtype": "$TEST_TRANSPORT", 00:20:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.734 "adrfam": "ipv4", 00:20:07.734 "trsvcid": "$NVMF_PORT", 00:20:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.734 "hdgst": ${hdgst:-false}, 00:20:07.734 "ddgst": ${ddgst:-false} 00:20:07.734 }, 00:20:07.734 "method": "bdev_nvme_attach_controller" 00:20:07.734 } 00:20:07.734 EOF 00:20:07.734 )") 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:07.734 { 00:20:07.734 "params": { 00:20:07.734 "name": "Nvme$subsystem", 00:20:07.734 "trtype": "$TEST_TRANSPORT", 00:20:07.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.734 "adrfam": "ipv4", 00:20:07.734 "trsvcid": "$NVMF_PORT", 00:20:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.734 "hdgst": ${hdgst:-false}, 00:20:07.734 "ddgst": ${ddgst:-false} 00:20:07.734 }, 00:20:07.734 "method": "bdev_nvme_attach_controller" 00:20:07.734 } 00:20:07.734 EOF 00:20:07.734 )") 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:07.734 "params": { 00:20:07.734 "name": "Nvme0", 00:20:07.734 "trtype": "tcp", 00:20:07.734 "traddr": "10.0.0.2", 00:20:07.734 "adrfam": "ipv4", 00:20:07.734 "trsvcid": "4420", 00:20:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:07.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:07.734 "hdgst": false, 00:20:07.734 "ddgst": false 00:20:07.734 }, 00:20:07.734 "method": "bdev_nvme_attach_controller" 00:20:07.734 },{ 00:20:07.734 "params": { 00:20:07.734 "name": "Nvme1", 00:20:07.734 "trtype": "tcp", 00:20:07.734 "traddr": "10.0.0.2", 00:20:07.734 "adrfam": "ipv4", 00:20:07.734 "trsvcid": "4420", 00:20:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.734 "hdgst": false, 00:20:07.734 "ddgst": false 00:20:07.734 }, 00:20:07.734 "method": "bdev_nvme_attach_controller" 00:20:07.734 },{ 00:20:07.734 "params": { 00:20:07.734 "name": "Nvme2", 00:20:07.734 "trtype": "tcp", 00:20:07.734 "traddr": "10.0.0.2", 00:20:07.734 "adrfam": "ipv4", 00:20:07.734 "trsvcid": "4420", 00:20:07.734 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:07.734 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:07.734 "hdgst": false, 00:20:07.734 "ddgst": false 00:20:07.734 }, 00:20:07.734 "method": "bdev_nvme_attach_controller" 00:20:07.734 }' 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:07.734 11:44:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.734 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:07.734 ... 00:20:07.734 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:07.734 ... 00:20:07.734 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:07.734 ... 00:20:07.734 fio-3.35 00:20:07.734 Starting 24 threads 00:20:19.966 00:20:19.966 filename0: (groupid=0, jobs=1): err= 0: pid=83487: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=164, BW=657KiB/s (672kB/s)(6592KiB/10041msec) 00:20:19.966 slat (usec): min=3, max=4109, avg=25.07, stdev=172.49 00:20:19.966 clat (msec): min=42, max=184, avg=97.32, stdev=24.12 00:20:19.966 lat (msec): min=42, max=184, avg=97.35, stdev=24.12 00:20:19.966 clat percentiles (msec): 00:20:19.966 | 1.00th=[ 61], 5.00th=[ 64], 10.00th=[ 70], 20.00th=[ 72], 00:20:19.966 | 30.00th=[ 81], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 107], 00:20:19.966 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 127], 95.00th=[ 142], 00:20:19.966 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 184], 99.95th=[ 184], 00:20:19.966 | 99.99th=[ 184] 00:20:19.966 bw ( KiB/s): min= 512, max= 880, per=3.72%, avg=652.65, stdev=109.07, samples=20 00:20:19.966 iops : min= 128, max= 220, avg=163.15, stdev=27.26, samples=20 00:20:19.966 lat (msec) : 50=0.36%, 100=51.94%, 250=47.69% 00:20:19.966 cpu : usr=37.00%, sys=1.66%, ctx=1282, majf=0, minf=9 00:20:19.966 IO depths : 1=0.1%, 2=3.9%, 4=16.1%, 8=65.7%, 16=14.2%, 32=0.0%, >=64=0.0% 00:20:19.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 complete : 0=0.0%, 4=92.0%, 8=4.5%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 issued rwts: total=1648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.966 filename0: (groupid=0, jobs=1): err= 0: pid=83488: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=175, BW=702KiB/s (719kB/s)(7044KiB/10038msec) 00:20:19.966 slat (usec): min=3, max=8025, avg=44.76, stdev=342.34 00:20:19.966 clat (msec): min=41, max=183, avg=90.90, stdev=24.80 00:20:19.966 lat (msec): min=41, max=183, avg=90.94, stdev=24.79 00:20:19.966 clat percentiles (msec): 00:20:19.966 | 1.00th=[ 45], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 68], 00:20:19.966 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 86], 60.00th=[ 100], 00:20:19.966 | 70.00th=[ 107], 80.00th=[ 114], 90.00th=[ 122], 95.00th=[ 133], 00:20:19.966 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 184], 99.95th=[ 184], 00:20:19.966 | 99.99th=[ 184] 00:20:19.966 bw ( KiB/s): min= 400, max= 960, per=3.98%, avg=698.00, stdev=129.59, samples=20 00:20:19.966 iops : min= 100, max= 240, avg=174.50, stdev=32.40, samples=20 00:20:19.966 lat (msec) : 50=2.56%, 100=57.92%, 250=39.52% 00:20:19.966 cpu : usr=42.98%, sys=1.92%, ctx=1305, majf=0, minf=9 00:20:19.966 IO depths : 1=0.1%, 2=3.2%, 4=12.9%, 8=69.7%, 16=14.1%, 32=0.0%, >=64=0.0% 00:20:19.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 complete : 0=0.0%, 4=90.6%, 8=6.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 issued rwts: total=1761,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:19.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.966 filename0: (groupid=0, jobs=1): err= 0: pid=83489: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=186, BW=746KiB/s (764kB/s)(7492KiB/10040msec) 00:20:19.966 slat (usec): min=4, max=8036, avg=30.71, stdev=306.60 00:20:19.966 clat (msec): min=34, max=178, avg=85.45, stdev=26.03 00:20:19.966 lat (msec): min=34, max=178, avg=85.48, stdev=26.03 00:20:19.966 clat percentiles (msec): 00:20:19.966 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 64], 00:20:19.966 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 86], 00:20:19.966 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 127], 00:20:19.966 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 178], 99.95th=[ 178], 00:20:19.966 | 99.99th=[ 178] 00:20:19.966 bw ( KiB/s): min= 568, max= 1019, per=4.25%, avg=744.95, stdev=152.05, samples=20 00:20:19.966 iops : min= 142, max= 254, avg=186.20, stdev=37.94, samples=20 00:20:19.966 lat (msec) : 50=8.54%, 100=58.68%, 250=32.78% 00:20:19.966 cpu : usr=33.01%, sys=1.23%, ctx=939, majf=0, minf=9 00:20:19.966 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:19.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 complete : 0=0.0%, 4=88.3%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 issued rwts: total=1873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.966 filename0: (groupid=0, jobs=1): err= 0: pid=83490: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=159, BW=639KiB/s (654kB/s)(6400KiB/10022msec) 00:20:19.966 slat (usec): min=3, max=8043, avg=33.34, stdev=360.13 00:20:19.966 clat (msec): min=49, max=182, avg=100.01, stdev=28.09 00:20:19.966 lat (msec): min=49, max=182, avg=100.04, stdev=28.08 00:20:19.966 clat percentiles (msec): 00:20:19.966 | 1.00th=[ 61], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 72], 00:20:19.966 | 30.00th=[ 79], 40.00th=[ 87], 50.00th=[ 100], 60.00th=[ 110], 00:20:19.966 | 70.00th=[ 115], 80.00th=[ 122], 90.00th=[ 144], 95.00th=[ 153], 00:20:19.966 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 184], 00:20:19.966 | 99.99th=[ 184] 00:20:19.966 bw ( KiB/s): min= 384, max= 880, per=3.61%, avg=633.65, stdev=157.79, samples=20 00:20:19.966 iops : min= 96, max= 220, avg=158.40, stdev=39.44, samples=20 00:20:19.966 lat (msec) : 50=0.12%, 100=50.25%, 250=49.62% 00:20:19.966 cpu : usr=36.83%, sys=1.60%, ctx=1110, majf=0, minf=9 00:20:19.966 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:20:19.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.966 filename0: (groupid=0, jobs=1): err= 0: pid=83491: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=173, BW=693KiB/s (710kB/s)(6972KiB/10062msec) 00:20:19.966 slat (usec): min=4, max=8033, avg=26.91, stdev=287.95 00:20:19.966 clat (msec): min=6, max=180, avg=91.98, stdev=28.18 00:20:19.966 lat (msec): min=6, max=180, avg=92.01, stdev=28.18 00:20:19.966 clat percentiles (msec): 00:20:19.966 | 1.00th=[ 9], 5.00th=[ 48], 10.00th=[ 64], 20.00th=[ 72], 00:20:19.966 | 30.00th=[ 73], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 106], 00:20:19.966 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 
121], 95.00th=[ 144], 00:20:19.966 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:20:19.966 | 99.99th=[ 180] 00:20:19.966 bw ( KiB/s): min= 512, max= 1280, per=3.96%, avg=693.10, stdev=176.78, samples=20 00:20:19.966 iops : min= 128, max= 320, avg=173.25, stdev=44.22, samples=20 00:20:19.966 lat (msec) : 10=2.75%, 50=2.81%, 100=52.61%, 250=41.82% 00:20:19.966 cpu : usr=33.00%, sys=1.54%, ctx=924, majf=0, minf=9 00:20:19.966 IO depths : 1=0.2%, 2=3.8%, 4=14.8%, 8=67.1%, 16=14.2%, 32=0.0%, >=64=0.0% 00:20:19.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 complete : 0=0.0%, 4=91.6%, 8=5.2%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.966 filename0: (groupid=0, jobs=1): err= 0: pid=83492: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=180, BW=722KiB/s (739kB/s)(7248KiB/10039msec) 00:20:19.966 slat (usec): min=6, max=5024, avg=27.06, stdev=183.90 00:20:19.966 clat (msec): min=24, max=179, avg=88.36, stdev=26.68 00:20:19.966 lat (msec): min=24, max=179, avg=88.39, stdev=26.69 00:20:19.966 clat percentiles (msec): 00:20:19.966 | 1.00th=[ 35], 5.00th=[ 49], 10.00th=[ 59], 20.00th=[ 67], 00:20:19.966 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 96], 00:20:19.966 | 70.00th=[ 106], 80.00th=[ 114], 90.00th=[ 122], 95.00th=[ 132], 00:20:19.966 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 180], 99.95th=[ 180], 00:20:19.966 | 99.99th=[ 180] 00:20:19.966 bw ( KiB/s): min= 512, max= 1000, per=4.11%, avg=720.70, stdev=150.33, samples=20 00:20:19.966 iops : min= 128, max= 250, avg=180.15, stdev=37.55, samples=20 00:20:19.966 lat (msec) : 50=5.79%, 100=58.22%, 250=35.98% 00:20:19.966 cpu : usr=42.08%, sys=2.01%, ctx=1547, majf=0, minf=9 00:20:19.966 IO depths : 1=0.1%, 2=2.5%, 4=10.1%, 8=72.7%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:19.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 complete : 0=0.0%, 4=89.8%, 8=7.9%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 issued rwts: total=1812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.966 filename0: (groupid=0, jobs=1): err= 0: pid=83493: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=190, BW=763KiB/s (781kB/s)(7648KiB/10025msec) 00:20:19.966 slat (usec): min=5, max=8058, avg=31.12, stdev=255.52 00:20:19.966 clat (msec): min=31, max=179, avg=83.73, stdev=26.29 00:20:19.966 lat (msec): min=31, max=179, avg=83.76, stdev=26.29 00:20:19.966 clat percentiles (msec): 00:20:19.966 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 64], 00:20:19.966 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 85], 00:20:19.966 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 125], 00:20:19.966 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 180], 00:20:19.966 | 99.99th=[ 180] 00:20:19.966 bw ( KiB/s): min= 560, max= 1024, per=4.33%, avg=758.40, stdev=155.75, samples=20 00:20:19.966 iops : min= 140, max= 256, avg=189.60, stdev=38.94, samples=20 00:20:19.966 lat (msec) : 50=9.47%, 100=60.93%, 250=29.60% 00:20:19.966 cpu : usr=41.88%, sys=1.86%, ctx=1355, majf=0, minf=9 00:20:19.966 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.7%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:19.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 complete : 0=0.0%, 4=87.8%, 8=11.3%, 16=0.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:19.966 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.966 filename0: (groupid=0, jobs=1): err= 0: pid=83494: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=192, BW=772KiB/s (790kB/s)(7752KiB/10047msec) 00:20:19.966 slat (usec): min=3, max=4035, avg=23.45, stdev=179.15 00:20:19.966 clat (msec): min=23, max=178, avg=82.81, stdev=25.95 00:20:19.966 lat (msec): min=23, max=178, avg=82.83, stdev=25.95 00:20:19.966 clat percentiles (msec): 00:20:19.966 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 61], 00:20:19.966 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:20:19.966 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 126], 00:20:19.966 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 180], 00:20:19.966 | 99.99th=[ 180] 00:20:19.966 bw ( KiB/s): min= 560, max= 1000, per=4.38%, avg=768.65, stdev=149.98, samples=20 00:20:19.966 iops : min= 140, max= 250, avg=192.15, stdev=37.50, samples=20 00:20:19.966 lat (msec) : 50=10.32%, 100=61.87%, 250=27.81% 00:20:19.966 cpu : usr=37.27%, sys=1.52%, ctx=1087, majf=0, minf=9 00:20:19.966 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:19.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.966 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.966 filename1: (groupid=0, jobs=1): err= 0: pid=83495: Fri Jul 12 11:44:21 2024 00:20:19.966 read: IOPS=193, BW=774KiB/s (792kB/s)(7788KiB/10067msec) 00:20:19.966 slat (usec): min=4, max=12029, avg=27.54, stdev=339.57 00:20:19.966 clat (msec): min=3, max=191, avg=82.49, stdev=30.98 00:20:19.967 lat (msec): min=3, max=191, avg=82.52, stdev=30.98 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 5], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 62], 00:20:19.967 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:20:19.967 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 192], 00:20:19.967 | 99.99th=[ 192] 00:20:19.967 bw ( KiB/s): min= 504, max= 1648, per=4.40%, avg=772.00, stdev=256.81, samples=20 00:20:19.967 iops : min= 126, max= 412, avg=192.95, stdev=64.19, samples=20 00:20:19.967 lat (msec) : 4=0.72%, 10=3.39%, 50=8.63%, 100=55.98%, 250=31.28% 00:20:19.967 cpu : usr=35.37%, sys=1.72%, ctx=978, majf=0, minf=9 00:20:19.967 IO depths : 1=0.2%, 2=1.4%, 4=4.9%, 8=78.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=88.6%, 8=10.3%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename1: (groupid=0, jobs=1): err= 0: pid=83496: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=194, BW=778KiB/s (797kB/s)(7820KiB/10047msec) 00:20:19.967 slat (usec): min=4, max=8026, avg=31.55, stdev=266.65 00:20:19.967 clat (msec): min=23, max=178, avg=81.92, stdev=26.60 00:20:19.967 lat (msec): min=23, max=178, avg=81.95, stdev=26.61 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 62], 00:20:19.967 | 30.00th=[ 67], 40.00th=[ 72], 
50.00th=[ 75], 60.00th=[ 83], 00:20:19.967 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 124], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 180], 00:20:19.967 | 99.99th=[ 180] 00:20:19.967 bw ( KiB/s): min= 536, max= 1024, per=4.44%, avg=777.80, stdev=158.86, samples=20 00:20:19.967 iops : min= 134, max= 256, avg=194.45, stdev=39.72, samples=20 00:20:19.967 lat (msec) : 50=10.64%, 100=61.07%, 250=28.29% 00:20:19.967 cpu : usr=44.01%, sys=1.87%, ctx=1379, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename1: (groupid=0, jobs=1): err= 0: pid=83497: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=188, BW=756KiB/s (774kB/s)(7572KiB/10022msec) 00:20:19.967 slat (usec): min=4, max=8046, avg=28.53, stdev=275.22 00:20:19.967 clat (msec): min=25, max=180, avg=84.48, stdev=25.95 00:20:19.967 lat (msec): min=25, max=180, avg=84.51, stdev=25.96 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:20:19.967 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:20:19.967 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 123], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:20:19.967 | 99.99th=[ 180] 00:20:19.967 bw ( KiB/s): min= 512, max= 1024, per=4.28%, avg=750.85, stdev=146.40, samples=20 00:20:19.967 iops : min= 128, max= 256, avg=187.70, stdev=36.60, samples=20 00:20:19.967 lat (msec) : 50=10.62%, 100=60.59%, 250=28.79% 00:20:19.967 cpu : usr=32.33%, sys=1.17%, ctx=926, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=87.8%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename1: (groupid=0, jobs=1): err= 0: pid=83498: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=177, BW=709KiB/s (726kB/s)(7124KiB/10050msec) 00:20:19.967 slat (usec): min=4, max=8038, avg=22.62, stdev=212.85 00:20:19.967 clat (msec): min=34, max=204, avg=90.01, stdev=27.00 00:20:19.967 lat (msec): min=34, max=204, avg=90.03, stdev=27.01 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 41], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 68], 00:20:19.967 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 97], 00:20:19.967 | 70.00th=[ 107], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 144], 00:20:19.967 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 205], 99.95th=[ 205], 00:20:19.967 | 99.99th=[ 205] 00:20:19.967 bw ( KiB/s): min= 512, max= 976, per=4.04%, avg=708.20, stdev=159.43, samples=20 00:20:19.967 iops : min= 128, max= 244, avg=177.05, stdev=39.86, samples=20 00:20:19.967 lat (msec) : 50=3.71%, 100=57.66%, 250=38.63% 00:20:19.967 cpu : usr=43.00%, sys=1.96%, ctx=1237, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=2.5%, 4=10.1%, 8=72.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:19.967 complete : 0=0.0%, 4=90.0%, 8=7.8%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename1: (groupid=0, jobs=1): err= 0: pid=83499: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=186, BW=745KiB/s (763kB/s)(7488KiB/10046msec) 00:20:19.967 slat (nsec): min=6116, max=50659, avg=18323.24, stdev=8775.72 00:20:19.967 clat (msec): min=31, max=176, avg=85.76, stdev=26.99 00:20:19.967 lat (msec): min=32, max=176, avg=85.78, stdev=26.99 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 64], 00:20:19.967 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 90], 00:20:19.967 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 178], 99.95th=[ 178], 00:20:19.967 | 99.99th=[ 178] 00:20:19.967 bw ( KiB/s): min= 560, max= 1024, per=4.24%, avg=742.25, stdev=148.38, samples=20 00:20:19.967 iops : min= 140, max= 256, avg=185.55, stdev=37.09, samples=20 00:20:19.967 lat (msec) : 50=7.69%, 100=59.88%, 250=32.43% 00:20:19.967 cpu : usr=35.83%, sys=1.70%, ctx=1158, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename1: (groupid=0, jobs=1): err= 0: pid=83500: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=191, BW=766KiB/s (784kB/s)(7680KiB/10029msec) 00:20:19.967 slat (usec): min=3, max=8030, avg=23.55, stdev=235.26 00:20:19.967 clat (msec): min=23, max=180, avg=83.44, stdev=26.31 00:20:19.967 lat (msec): min=23, max=180, avg=83.46, stdev=26.30 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 62], 00:20:19.967 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:20:19.967 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 126], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 180], 00:20:19.967 | 99.99th=[ 180] 00:20:19.967 bw ( KiB/s): min= 512, max= 1040, per=4.34%, avg=761.60, stdev=156.00, samples=20 00:20:19.967 iops : min= 128, max= 260, avg=190.40, stdev=39.00, samples=20 00:20:19.967 lat (msec) : 50=9.06%, 100=61.88%, 250=29.06% 00:20:19.967 cpu : usr=36.04%, sys=1.60%, ctx=1040, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename1: (groupid=0, jobs=1): err= 0: pid=83501: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=185, BW=741KiB/s (759kB/s)(7432KiB/10031msec) 00:20:19.967 slat (usec): min=4, max=8027, avg=28.32, stdev=241.57 00:20:19.967 clat (msec): min=32, max=184, avg=86.21, stdev=25.56 00:20:19.967 lat (msec): min=32, max=184, avg=86.24, stdev=25.56 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 40], 5.00th=[ 49], 10.00th=[ 
59], 20.00th=[ 65], 00:20:19.967 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 88], 00:20:19.967 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 128], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 184], 00:20:19.967 | 99.99th=[ 184] 00:20:19.967 bw ( KiB/s): min= 488, max= 1000, per=4.20%, avg=736.60, stdev=148.13, samples=20 00:20:19.967 iops : min= 122, max= 250, avg=184.15, stdev=37.03, samples=20 00:20:19.967 lat (msec) : 50=5.54%, 100=61.63%, 250=32.83% 00:20:19.967 cpu : usr=40.16%, sys=1.74%, ctx=1218, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=76.0%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=88.8%, 8=9.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename1: (groupid=0, jobs=1): err= 0: pid=83502: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=170, BW=680KiB/s (696kB/s)(6828KiB/10039msec) 00:20:19.967 slat (usec): min=4, max=5022, avg=25.55, stdev=155.54 00:20:19.967 clat (msec): min=48, max=181, avg=93.81, stdev=24.51 00:20:19.967 lat (msec): min=48, max=181, avg=93.83, stdev=24.51 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 58], 5.00th=[ 64], 10.00th=[ 66], 20.00th=[ 70], 00:20:19.967 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 104], 00:20:19.967 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 122], 95.00th=[ 140], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 182], 99.95th=[ 182], 00:20:19.967 | 99.99th=[ 182] 00:20:19.967 bw ( KiB/s): min= 440, max= 881, per=3.86%, avg=676.25, stdev=116.51, samples=20 00:20:19.967 iops : min= 110, max= 220, avg=169.05, stdev=29.11, samples=20 00:20:19.967 lat (msec) : 50=0.35%, 100=56.36%, 250=43.29% 00:20:19.967 cpu : usr=45.02%, sys=1.71%, ctx=1222, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=4.4%, 4=17.5%, 8=64.6%, 16=13.4%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=92.0%, 8=4.1%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename2: (groupid=0, jobs=1): err= 0: pid=83503: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=177, BW=710KiB/s (727kB/s)(7132KiB/10049msec) 00:20:19.967 slat (usec): min=4, max=8026, avg=26.72, stdev=243.39 00:20:19.967 clat (msec): min=37, max=201, avg=89.97, stdev=26.95 00:20:19.967 lat (msec): min=37, max=201, avg=90.00, stdev=26.95 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 68], 00:20:19.967 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 96], 00:20:19.967 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 123], 95.00th=[ 136], 00:20:19.967 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 201], 99.95th=[ 201], 00:20:19.967 | 99.99th=[ 201] 00:20:19.967 bw ( KiB/s): min= 512, max= 1024, per=4.03%, avg=706.80, stdev=153.25, samples=20 00:20:19.967 iops : min= 128, max= 256, avg=176.70, stdev=38.31, samples=20 00:20:19.967 lat (msec) : 50=6.00%, 100=56.37%, 250=37.63% 00:20:19.967 cpu : usr=40.30%, sys=1.66%, ctx=1212, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=2.8%, 4=11.2%, 8=71.6%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=90.2%, 8=7.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename2: (groupid=0, jobs=1): err= 0: pid=83504: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=194, BW=778KiB/s (797kB/s)(7788KiB/10008msec) 00:20:19.967 slat (usec): min=5, max=8037, avg=31.14, stdev=327.61 00:20:19.967 clat (msec): min=25, max=181, avg=82.09, stdev=26.70 00:20:19.967 lat (msec): min=25, max=181, avg=82.12, stdev=26.70 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:20:19.967 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:20:19.967 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 124], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 182], 00:20:19.967 | 99.99th=[ 182] 00:20:19.967 bw ( KiB/s): min= 560, max= 1000, per=4.44%, avg=777.68, stdev=154.17, samples=19 00:20:19.967 iops : min= 140, max= 250, avg=194.42, stdev=38.54, samples=19 00:20:19.967 lat (msec) : 50=11.76%, 100=60.55%, 250=27.68% 00:20:19.967 cpu : usr=32.32%, sys=1.13%, ctx=929, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename2: (groupid=0, jobs=1): err= 0: pid=83505: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=173, BW=693KiB/s (710kB/s)(6964KiB/10043msec) 00:20:19.967 slat (usec): min=4, max=8034, avg=20.41, stdev=192.35 00:20:19.967 clat (msec): min=27, max=216, avg=92.18, stdev=27.16 00:20:19.967 lat (msec): min=27, max=216, avg=92.20, stdev=27.16 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 71], 00:20:19.967 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 99], 00:20:19.967 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 130], 95.00th=[ 144], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 218], 99.95th=[ 218], 00:20:19.967 | 99.99th=[ 218] 00:20:19.967 bw ( KiB/s): min= 496, max= 968, per=3.94%, avg=690.00, stdev=135.34, samples=20 00:20:19.967 iops : min= 124, max= 242, avg=172.50, stdev=33.83, samples=20 00:20:19.967 lat (msec) : 50=5.57%, 100=56.63%, 250=37.79% 00:20:19.967 cpu : usr=32.43%, sys=1.50%, ctx=913, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=2.5%, 4=10.2%, 8=72.6%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=90.0%, 8=7.8%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename2: (groupid=0, jobs=1): err= 0: pid=83506: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=193, BW=774KiB/s (793kB/s)(7760KiB/10022msec) 00:20:19.967 slat (usec): min=3, max=8099, avg=51.86, stdev=466.76 00:20:19.967 clat (msec): min=23, max=187, avg=82.41, stdev=26.15 00:20:19.967 lat (msec): min=23, max=187, avg=82.46, stdev=26.14 00:20:19.967 clat percentiles (msec): 
00:20:19.967 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 62], 00:20:19.967 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:20:19.967 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 123], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 188], 99.95th=[ 188], 00:20:19.967 | 99.99th=[ 188] 00:20:19.967 bw ( KiB/s): min= 512, max= 1000, per=4.39%, avg=769.60, stdev=153.57, samples=20 00:20:19.967 iops : min= 128, max= 250, avg=192.40, stdev=38.39, samples=20 00:20:19.967 lat (msec) : 50=10.46%, 100=60.57%, 250=28.97% 00:20:19.967 cpu : usr=41.07%, sys=1.74%, ctx=1337, majf=0, minf=9 00:20:19.967 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:19.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.967 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.967 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.967 filename2: (groupid=0, jobs=1): err= 0: pid=83507: Fri Jul 12 11:44:21 2024 00:20:19.967 read: IOPS=194, BW=778KiB/s (796kB/s)(7788KiB/10016msec) 00:20:19.967 slat (usec): min=4, max=7033, avg=30.39, stdev=245.91 00:20:19.967 clat (msec): min=23, max=182, avg=82.15, stdev=27.15 00:20:19.967 lat (msec): min=23, max=182, avg=82.18, stdev=27.16 00:20:19.967 clat percentiles (msec): 00:20:19.967 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:20:19.967 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 83], 00:20:19.967 | 70.00th=[ 97], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 124], 00:20:19.967 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 184], 00:20:19.967 | 99.99th=[ 184] 00:20:19.967 bw ( KiB/s): min= 560, max= 1024, per=4.42%, avg=774.80, stdev=156.45, samples=20 00:20:19.967 iops : min= 140, max= 256, avg=193.70, stdev=39.11, samples=20 00:20:19.967 lat (msec) : 50=12.17%, 100=59.37%, 250=28.45% 00:20:19.967 cpu : usr=38.18%, sys=1.68%, ctx=1110, majf=0, minf=9 00:20:19.968 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:19.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.968 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.968 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.968 filename2: (groupid=0, jobs=1): err= 0: pid=83508: Fri Jul 12 11:44:21 2024 00:20:19.968 read: IOPS=162, BW=649KiB/s (665kB/s)(6528KiB/10057msec) 00:20:19.968 slat (usec): min=3, max=8030, avg=30.44, stdev=343.38 00:20:19.968 clat (msec): min=9, max=206, avg=98.35, stdev=31.63 00:20:19.968 lat (msec): min=9, max=206, avg=98.39, stdev=31.64 00:20:19.968 clat percentiles (msec): 00:20:19.968 | 1.00th=[ 12], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 72], 00:20:19.968 | 30.00th=[ 73], 40.00th=[ 86], 50.00th=[ 107], 60.00th=[ 109], 00:20:19.968 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 144], 95.00th=[ 157], 00:20:19.968 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 207], 99.95th=[ 207], 00:20:19.968 | 99.99th=[ 207] 00:20:19.968 bw ( KiB/s): min= 400, max= 1264, per=3.69%, avg=646.30, stdev=207.92, samples=20 00:20:19.968 iops : min= 100, max= 316, avg=161.55, stdev=51.96, samples=20 00:20:19.968 lat (msec) : 10=0.98%, 20=0.98%, 50=1.84%, 100=43.20%, 250=53.00% 00:20:19.968 cpu : usr=32.25%, sys=1.32%, ctx=945, majf=0, minf=9 00:20:19.968 IO depths : 1=0.1%, 
2=6.2%, 4=24.9%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:20:19.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.968 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.968 issued rwts: total=1632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.968 filename2: (groupid=0, jobs=1): err= 0: pid=83509: Fri Jul 12 11:44:21 2024 00:20:19.968 read: IOPS=189, BW=757KiB/s (775kB/s)(7616KiB/10064msec) 00:20:19.968 slat (usec): min=4, max=8027, avg=20.61, stdev=183.78 00:20:19.968 clat (msec): min=18, max=179, avg=84.34, stdev=26.39 00:20:19.968 lat (msec): min=18, max=179, avg=84.36, stdev=26.39 00:20:19.968 clat percentiles (msec): 00:20:19.968 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:20:19.968 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 87], 00:20:19.968 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 121], 00:20:19.968 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:20:19.968 | 99.99th=[ 180] 00:20:19.968 bw ( KiB/s): min= 528, max= 1136, per=4.31%, avg=755.20, stdev=176.79, samples=20 00:20:19.968 iops : min= 132, max= 284, avg=188.80, stdev=44.20, samples=20 00:20:19.968 lat (msec) : 20=0.84%, 50=9.56%, 100=57.98%, 250=31.62% 00:20:19.968 cpu : usr=33.14%, sys=1.33%, ctx=951, majf=0, minf=9 00:20:19.968 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:19.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.968 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.968 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.968 filename2: (groupid=0, jobs=1): err= 0: pid=83510: Fri Jul 12 11:44:21 2024 00:20:19.968 read: IOPS=198, BW=794KiB/s (813kB/s)(8000KiB/10072msec) 00:20:19.968 slat (usec): min=3, max=8027, avg=22.18, stdev=182.46 00:20:19.968 clat (usec): min=1547, max=169667, avg=80282.80, stdev=33883.39 00:20:19.968 lat (usec): min=1555, max=169702, avg=80304.98, stdev=33885.90 00:20:19.968 clat percentiles (usec): 00:20:19.968 | 1.00th=[ 1663], 5.00th=[ 3392], 10.00th=[ 45351], 20.00th=[ 61080], 00:20:19.968 | 30.00th=[ 67634], 40.00th=[ 71828], 50.00th=[ 79168], 60.00th=[ 84411], 00:20:19.968 | 70.00th=[101188], 80.00th=[108528], 90.00th=[120062], 95.00th=[130548], 00:20:19.968 | 99.00th=[156238], 99.50th=[168821], 99.90th=[168821], 99.95th=[168821], 00:20:19.968 | 99.99th=[168821] 00:20:19.968 bw ( KiB/s): min= 536, max= 2269, per=4.53%, avg=794.95, stdev=368.44, samples=20 00:20:19.968 iops : min= 134, max= 567, avg=198.70, stdev=92.07, samples=20 00:20:19.968 lat (msec) : 2=3.10%, 4=2.40%, 10=2.50%, 50=5.15%, 100=55.75% 00:20:19.968 lat (msec) : 250=31.10% 00:20:19.968 cpu : usr=36.00%, sys=1.91%, ctx=1183, majf=0, minf=0 00:20:19.968 IO depths : 1=0.5%, 2=2.3%, 4=7.5%, 8=74.9%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:19.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.968 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.968 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.968 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:19.968 00:20:19.968 Run status group 0 (all jobs): 00:20:19.968 READ: bw=17.1MiB/s (17.9MB/s), 639KiB/s-794KiB/s (654kB/s-813kB/s), io=172MiB (181MB), run=10008-10072msec 00:20:19.968 
11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 bdev_null0 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 [2024-07-12 11:44:21.623424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 bdev_null1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.968 { 00:20:19.968 "params": { 00:20:19.968 "name": "Nvme$subsystem", 00:20:19.968 "trtype": "$TEST_TRANSPORT", 00:20:19.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.968 "adrfam": "ipv4", 00:20:19.968 "trsvcid": "$NVMF_PORT", 00:20:19.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.968 "hdgst": ${hdgst:-false}, 00:20:19.968 "ddgst": ${ddgst:-false} 00:20:19.968 }, 00:20:19.968 "method": "bdev_nvme_attach_controller" 00:20:19.968 } 00:20:19.968 EOF 00:20:19.968 )") 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 
-- # cat 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.968 { 00:20:19.968 "params": { 00:20:19.968 "name": "Nvme$subsystem", 00:20:19.968 "trtype": "$TEST_TRANSPORT", 00:20:19.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.968 "adrfam": "ipv4", 00:20:19.968 "trsvcid": "$NVMF_PORT", 00:20:19.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.968 "hdgst": ${hdgst:-false}, 00:20:19.968 "ddgst": ${ddgst:-false} 00:20:19.968 }, 00:20:19.968 "method": "bdev_nvme_attach_controller" 00:20:19.968 } 00:20:19.968 EOF 00:20:19.968 )") 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.968 "params": { 00:20:19.968 "name": "Nvme0", 00:20:19.968 "trtype": "tcp", 00:20:19.968 "traddr": "10.0.0.2", 00:20:19.968 "adrfam": "ipv4", 00:20:19.968 "trsvcid": "4420", 00:20:19.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.968 "hdgst": false, 00:20:19.968 "ddgst": false 00:20:19.968 }, 00:20:19.968 "method": "bdev_nvme_attach_controller" 00:20:19.968 },{ 00:20:19.968 "params": { 00:20:19.968 "name": "Nvme1", 00:20:19.968 "trtype": "tcp", 00:20:19.968 "traddr": "10.0.0.2", 00:20:19.968 "adrfam": "ipv4", 00:20:19.968 "trsvcid": "4420", 00:20:19.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.968 "hdgst": false, 00:20:19.968 "ddgst": false 00:20:19.968 }, 00:20:19.968 "method": "bdev_nvme_attach_controller" 00:20:19.968 }' 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.968 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.969 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:19.969 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.969 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.969 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.969 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:19.969 11:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.969 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:19.969 ... 00:20:19.969 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:19.969 ... 
00:20:19.969 fio-3.35 00:20:19.969 Starting 4 threads 00:20:24.154 00:20:24.154 filename0: (groupid=0, jobs=1): err= 0: pid=83651: Fri Jul 12 11:44:27 2024 00:20:24.154 read: IOPS=1700, BW=13.3MiB/s (13.9MB/s)(66.4MiB/5001msec) 00:20:24.154 slat (nsec): min=7493, max=49449, avg=15049.83, stdev=4143.91 00:20:24.154 clat (usec): min=998, max=7051, avg=4653.03, stdev=933.40 00:20:24.154 lat (usec): min=1006, max=7066, avg=4668.08, stdev=932.98 00:20:24.154 clat percentiles (usec): 00:20:24.154 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3392], 20.00th=[ 3458], 00:20:24.154 | 30.00th=[ 3589], 40.00th=[ 4424], 50.00th=[ 5276], 60.00th=[ 5407], 00:20:24.154 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5538], 95.00th=[ 5604], 00:20:24.154 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 5997], 99.95th=[ 6128], 00:20:24.154 | 99.99th=[ 7046] 00:20:24.154 bw ( KiB/s): min=11520, max=16336, per=20.68%, avg=13299.56, stdev=2222.31, samples=9 00:20:24.154 iops : min= 1440, max= 2042, avg=1662.44, stdev=277.79, samples=9 00:20:24.154 lat (usec) : 1000=0.01% 00:20:24.154 lat (msec) : 2=0.09%, 4=35.67%, 10=64.22% 00:20:24.154 cpu : usr=91.56%, sys=7.50%, ctx=7, majf=0, minf=9 00:20:24.154 IO depths : 1=0.1%, 2=13.2%, 4=60.3%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.154 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.154 issued rwts: total=8503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:24.154 filename0: (groupid=0, jobs=1): err= 0: pid=83652: Fri Jul 12 11:44:27 2024 00:20:24.154 read: IOPS=2308, BW=18.0MiB/s (18.9MB/s)(90.2MiB/5003msec) 00:20:24.154 slat (nsec): min=3809, max=46558, avg=12056.92, stdev=4238.94 00:20:24.154 clat (usec): min=1030, max=7482, avg=3431.40, stdev=970.30 00:20:24.154 lat (usec): min=1038, max=7497, avg=3443.46, stdev=970.78 00:20:24.154 clat percentiles (usec): 00:20:24.154 | 1.00th=[ 1909], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 2114], 00:20:24.154 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3490], 00:20:24.154 | 70.00th=[ 3654], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5211], 00:20:24.154 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 6325], 99.95th=[ 6456], 00:20:24.154 | 99.99th=[ 7046] 00:20:24.154 bw ( KiB/s): min=16016, max=20624, per=29.11%, avg=18718.22, stdev=2176.36, samples=9 00:20:24.154 iops : min= 2002, max= 2578, avg=2339.78, stdev=272.04, samples=9 00:20:24.154 lat (msec) : 2=12.19%, 4=63.21%, 10=24.60% 00:20:24.154 cpu : usr=91.44%, sys=7.52%, ctx=29, majf=0, minf=9 00:20:24.154 IO depths : 1=0.1%, 2=1.0%, 4=70.8%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.154 complete : 0=0.0%, 4=99.6%, 8=0.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.154 issued rwts: total=11551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:24.154 filename1: (groupid=0, jobs=1): err= 0: pid=83653: Fri Jul 12 11:44:27 2024 00:20:24.154 read: IOPS=1699, BW=13.3MiB/s (13.9MB/s)(66.4MiB/5002msec) 00:20:24.154 slat (nsec): min=6986, max=81543, avg=15275.43, stdev=4041.98 00:20:24.154 clat (usec): min=1925, max=7049, avg=4655.59, stdev=930.17 00:20:24.154 lat (usec): min=1939, max=7064, avg=4670.87, stdev=930.33 00:20:24.154 clat percentiles (usec): 00:20:24.154 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3392], 20.00th=[ 
3458], 00:20:24.154 | 30.00th=[ 3589], 40.00th=[ 4424], 50.00th=[ 5276], 60.00th=[ 5407], 00:20:24.154 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5538], 95.00th=[ 5604], 00:20:24.154 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 6063], 99.95th=[ 6128], 00:20:24.154 | 99.99th=[ 7046] 00:20:24.154 bw ( KiB/s): min=11536, max=16336, per=20.68%, avg=13297.00, stdev=2242.66, samples=9 00:20:24.154 iops : min= 1442, max= 2042, avg=1662.11, stdev=280.34, samples=9 00:20:24.154 lat (msec) : 2=0.04%, 4=35.75%, 10=64.21% 00:20:24.154 cpu : usr=91.88%, sys=7.20%, ctx=614, majf=0, minf=10 00:20:24.154 IO depths : 1=0.1%, 2=13.2%, 4=60.3%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.154 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.154 issued rwts: total=8500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:24.154 filename1: (groupid=0, jobs=1): err= 0: pid=83654: Fri Jul 12 11:44:27 2024 00:20:24.154 read: IOPS=2332, BW=18.2MiB/s (19.1MB/s)(91.2MiB/5005msec) 00:20:24.154 slat (nsec): min=6295, max=44142, avg=10368.47, stdev=3411.80 00:20:24.154 clat (usec): min=713, max=10926, avg=3400.66, stdev=1014.45 00:20:24.154 lat (usec): min=726, max=10959, avg=3411.03, stdev=1014.26 00:20:24.154 clat percentiles (usec): 00:20:24.154 | 1.00th=[ 1205], 5.00th=[ 1958], 10.00th=[ 1991], 20.00th=[ 2073], 00:20:24.154 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3490], 00:20:24.154 | 70.00th=[ 3621], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5211], 00:20:24.154 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 6980], 99.95th=[10683], 00:20:24.154 | 99.99th=[10683] 00:20:24.154 bw ( KiB/s): min=15936, max=20512, per=29.48%, avg=18956.44, stdev=2121.63, samples=9 00:20:24.154 iops : min= 1992, max= 2564, avg=2369.56, stdev=265.20, samples=9 00:20:24.154 lat (usec) : 750=0.21%, 1000=0.63% 00:20:24.154 lat (msec) : 2=11.89%, 4=64.05%, 10=23.15%, 20=0.07% 00:20:24.154 cpu : usr=90.61%, sys=8.41%, ctx=10, majf=0, minf=0 00:20:24.154 IO depths : 1=0.1%, 2=0.3%, 4=71.1%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.154 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.154 issued rwts: total=11673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:24.154 00:20:24.154 Run status group 0 (all jobs): 00:20:24.154 READ: bw=62.8MiB/s (65.8MB/s), 13.3MiB/s-18.2MiB/s (13.9MB/s-19.1MB/s), io=314MiB (330MB), run=5001-5005msec 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.414 11:44:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.414 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 ************************************ 00:20:24.415 END TEST fio_dif_rand_params 00:20:24.415 ************************************ 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 00:20:24.415 real 0m23.518s 00:20:24.415 user 2m5.086s 00:20:24.415 sys 0m7.426s 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.415 11:44:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 11:44:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:24.415 11:44:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:24.415 11:44:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:24.415 11:44:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.415 11:44:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 ************************************ 00:20:24.415 START TEST fio_dif_digest 00:20:24.415 ************************************ 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 bdev_null0 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:24.415 [2024-07-12 11:44:27.791723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:24.415 { 00:20:24.415 "params": { 00:20:24.415 "name": "Nvme$subsystem", 
00:20:24.415 "trtype": "$TEST_TRANSPORT", 00:20:24.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.415 "adrfam": "ipv4", 00:20:24.415 "trsvcid": "$NVMF_PORT", 00:20:24.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.415 "hdgst": ${hdgst:-false}, 00:20:24.415 "ddgst": ${ddgst:-false} 00:20:24.415 }, 00:20:24.415 "method": "bdev_nvme_attach_controller" 00:20:24.415 } 00:20:24.415 EOF 00:20:24.415 )") 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:24.415 "params": { 00:20:24.415 "name": "Nvme0", 00:20:24.415 "trtype": "tcp", 00:20:24.415 "traddr": "10.0.0.2", 00:20:24.415 "adrfam": "ipv4", 00:20:24.415 "trsvcid": "4420", 00:20:24.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:24.415 "hdgst": true, 00:20:24.415 "ddgst": true 00:20:24.415 }, 00:20:24.415 "method": "bdev_nvme_attach_controller" 00:20:24.415 }' 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:24.415 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.674 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:24.674 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:24.674 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:24.674 11:44:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.674 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:24.674 ... 
00:20:24.674 fio-3.35 00:20:24.674 Starting 3 threads 00:20:36.924 00:20:36.924 filename0: (groupid=0, jobs=1): err= 0: pid=83759: Fri Jul 12 11:44:38 2024 00:20:36.924 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(275MiB/10001msec) 00:20:36.924 slat (nsec): min=6873, max=61320, avg=13497.75, stdev=7344.62 00:20:36.924 clat (usec): min=5312, max=16191, avg=13608.88, stdev=412.27 00:20:36.924 lat (usec): min=5319, max=16211, avg=13622.38, stdev=412.17 00:20:36.924 clat percentiles (usec): 00:20:36.924 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:20:36.924 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:20:36.924 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:20:36.924 | 99.00th=[14484], 99.50th=[15008], 99.90th=[16188], 99.95th=[16188], 00:20:36.924 | 99.99th=[16188] 00:20:36.924 bw ( KiB/s): min=27648, max=28416, per=33.40%, avg=28173.47, stdev=366.77, samples=19 00:20:36.924 iops : min= 216, max= 222, avg=220.11, stdev= 2.87, samples=19 00:20:36.924 lat (msec) : 10=0.14%, 20=99.86% 00:20:36.924 cpu : usr=93.88%, sys=5.52%, ctx=18, majf=0, minf=0 00:20:36.924 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.924 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.924 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.924 filename0: (groupid=0, jobs=1): err= 0: pid=83760: Fri Jul 12 11:44:38 2024 00:20:36.924 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(275MiB/10003msec) 00:20:36.924 slat (nsec): min=7363, max=60114, avg=17284.59, stdev=8814.56 00:20:36.924 clat (usec): min=10167, max=21449, avg=13627.76, stdev=407.94 00:20:36.924 lat (usec): min=10176, max=21476, avg=13645.04, stdev=407.57 00:20:36.924 clat percentiles (usec): 00:20:36.924 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:20:36.924 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:20:36.924 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:20:36.924 | 99.00th=[14484], 99.50th=[15795], 99.90th=[21365], 99.95th=[21365], 00:20:36.924 | 99.99th=[21365] 00:20:36.924 bw ( KiB/s): min=26933, max=29184, per=33.31%, avg=28095.42, stdev=525.21, samples=19 00:20:36.924 iops : min= 210, max= 228, avg=219.47, stdev= 4.15, samples=19 00:20:36.924 lat (msec) : 20=99.86%, 50=0.14% 00:20:36.924 cpu : usr=94.61%, sys=4.81%, ctx=30, majf=0, minf=0 00:20:36.924 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.924 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.924 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.924 filename0: (groupid=0, jobs=1): err= 0: pid=83761: Fri Jul 12 11:44:38 2024 00:20:36.924 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(275MiB/10002msec) 00:20:36.924 slat (nsec): min=7623, max=82562, avg=21795.90, stdev=12275.10 00:20:36.924 clat (usec): min=10086, max=20085, avg=13608.05, stdev=375.04 00:20:36.924 lat (usec): min=10111, max=20110, avg=13629.84, stdev=374.82 00:20:36.924 clat percentiles (usec): 00:20:36.924 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13304], 20.00th=[13435], 00:20:36.924 | 30.00th=[13435], 
40.00th=[13566], 50.00th=[13566], 60.00th=[13566], 00:20:36.924 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:20:36.924 | 99.00th=[14484], 99.50th=[15795], 99.90th=[20055], 99.95th=[20055], 00:20:36.924 | 99.99th=[20055] 00:20:36.924 bw ( KiB/s): min=26880, max=28416, per=33.31%, avg=28092.63, stdev=466.16, samples=19 00:20:36.924 iops : min= 210, max= 222, avg=219.47, stdev= 3.64, samples=19 00:20:36.924 lat (msec) : 20=99.86%, 50=0.14% 00:20:36.924 cpu : usr=93.28%, sys=6.05%, ctx=77, majf=0, minf=9 00:20:36.924 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.924 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.924 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.924 00:20:36.924 Run status group 0 (all jobs): 00:20:36.924 READ: bw=82.4MiB/s (86.4MB/s), 27.4MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=824MiB (864MB), run=10001-10003msec 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:36.924 ************************************ 00:20:36.924 END TEST fio_dif_digest 00:20:36.924 ************************************ 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.924 00:20:36.924 real 0m11.027s 00:20:36.924 user 0m28.857s 00:20:36.924 sys 0m1.906s 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:36.924 11:44:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:36.924 11:44:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:36.924 11:44:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:36.924 rmmod nvme_tcp 00:20:36.924 rmmod nvme_fabrics 00:20:36.924 rmmod nvme_keyring 00:20:36.924 
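[Editor's sketch] The fio_dif_digest run summarized above feeds fio two anonymous files over /dev/fd: the SPDK JSON config on fd 62 and the generated job file on fd 61. A standalone approximation with regular files follows. Only the inner bdev_nvme_attach_controller object is visible in the trace, so the "subsystems"/"config" wrapper and the Nvme0n1 filename are assumptions based on SPDK's usual JSON-config layout, and the job parameters (randread, 128k blocks, iodepth 3, 3 jobs, ~10 s) are read off the fio banner and results above:

# Sketch: reproduce the digest run by hand (paths as used elsewhere in this log).
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
JSON

cat > /tmp/dif.fio <<'FIO'
[global]
thread=1            ; the spdk_bdev engine requires fio threads
time_based=1
runtime=10

[filename0]
filename=Nvme0n1    ; bdev created by the attach call above (name assumed: <controller>n1)
rw=randread
bs=128k
iodepth=3
numjobs=3
FIO

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json /tmp/dif.fio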
11:44:38 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83007 ']' 00:20:36.924 11:44:38 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83007 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83007 ']' 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83007 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83007 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:36.924 killing process with pid 83007 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83007' 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83007 00:20:36.924 11:44:38 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83007 00:20:36.924 11:44:39 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:36.924 11:44:39 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:36.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:36.924 Waiting for block devices as requested 00:20:36.924 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.924 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.924 11:44:39 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:36.924 11:44:39 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:36.924 11:44:39 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.924 11:44:39 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:36.924 11:44:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.924 11:44:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:36.924 11:44:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.924 11:44:39 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:36.924 00:20:36.924 real 0m59.770s 00:20:36.924 user 3m49.446s 00:20:36.924 sys 0m17.832s 00:20:36.924 11:44:39 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:36.924 11:44:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:36.925 ************************************ 00:20:36.925 END TEST nvmf_dif 00:20:36.925 ************************************ 00:20:36.925 11:44:39 -- common/autotest_common.sh@1142 -- # return 0 00:20:36.925 11:44:39 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:36.925 11:44:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:36.925 11:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.925 11:44:39 -- common/autotest_common.sh@10 -- # set +x 00:20:36.925 ************************************ 00:20:36.925 START TEST nvmf_abort_qd_sizes 00:20:36.925 ************************************ 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:36.925 * Looking for test storage... 00:20:36.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:36.925 11:44:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:36.925 Cannot find device "nvmf_tgt_br" 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.925 Cannot find device "nvmf_tgt_br2" 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:36.925 Cannot find device "nvmf_tgt_br" 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:36.925 Cannot find device "nvmf_tgt_br2" 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:36.925 11:44:39 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:36.925 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:36.926 11:44:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:36.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:20:36.926 00:20:36.926 --- 10.0.0.2 ping statistics --- 00:20:36.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.926 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:36.926 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:36.926 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:36.926 00:20:36.926 --- 10.0.0.3 ping statistics --- 00:20:36.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.926 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:36.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:36.926 00:20:36.926 --- 10.0.0.1 ping statistics --- 00:20:36.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.926 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:36.926 11:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:37.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.754 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.754 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84355 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84355 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84355 ']' 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.754 11:44:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:38.013 [2024-07-12 11:44:41.221049] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
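[Editor's sketch] The nvmf_veth_init sequence traced above is what gives this target process its own network stack: the target-side veth ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.2/24, with a second interface at 10.0.0.3/24), the initiator end (10.0.0.1/24) stays in the root namespace, the peer halves are bridged over nvmf_br, and TCP/4420 is opened in iptables before the connectivity pings. A condensed sketch of that topology plus the namespace-scoped target launch; the command lines are lifted from the trace, while the ordering, comments and backgrounding are added here, and the second target interface and ping checks are omitted for brevity:

# Veth/bridge topology used for these tests (condensed from nvmf_veth_init above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Target runs entirely inside the namespace; RPCs still go over /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
# (backgrounded here; the script's waitforlisten then polls /var/tmp/spdk.sock)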
00:20:38.013 [2024-07-12 11:44:41.221126] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.013 [2024-07-12 11:44:41.356416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.272 [2024-07-12 11:44:41.467622] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.272 [2024-07-12 11:44:41.467678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.272 [2024-07-12 11:44:41.467689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.272 [2024-07-12 11:44:41.467698] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.272 [2024-07-12 11:44:41.467706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.272 [2024-07-12 11:44:41.467798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.272 [2024-07-12 11:44:41.468699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.272 [2024-07-12 11:44:41.468839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.272 [2024-07-12 11:44:41.468843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.272 [2024-07-12 11:44:41.522922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:38.838 11:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.838 11:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:20:38.838 11:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.838 11:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:38.839 11:44:42 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:38.839 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
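[Editor's sketch] nvme_in_userspace above selects target disks by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory controller), prog-if 02 (NVM Express), keeping only functions that still have a node under /sys/bus/pci/drivers/nvme. Condensed into one loop, with the lspci/awk/tr pipeline lifted from the scripts/common.sh trace; on this run it yields the two controllers printed above:

# Enumerate NVMe controllers by PCI class code 0108, prog-if 02 (condensed from the trace above).
nvmes=()
for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
               | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
done
printf '%s\n' "${nvmes[@]}"    # here: 0000:00:10.0 and 0000:00:11.0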
00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.098 11:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:39.098 ************************************ 00:20:39.098 START TEST spdk_target_abort 00:20:39.098 ************************************ 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:39.098 spdk_targetn1 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:39.098 [2024-07-12 11:44:42.376112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:39.098 [2024-07-12 11:44:42.404292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.098 11:44:42 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:39.098 11:44:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:42.384 Initializing NVMe Controllers 00:20:42.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:42.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:42.384 Initialization complete. Launching workers. 
00:20:42.384 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10969, failed: 0 00:20:42.384 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1024, failed to submit 9945 00:20:42.384 success 841, unsuccess 183, failed 0 00:20:42.384 11:44:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:42.384 11:44:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:45.692 Initializing NVMe Controllers 00:20:45.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:45.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:45.692 Initialization complete. Launching workers. 00:20:45.692 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8964, failed: 0 00:20:45.692 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 7793 00:20:45.692 success 415, unsuccess 756, failed 0 00:20:45.692 11:44:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:45.692 11:44:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.973 Initializing NVMe Controllers 00:20:48.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:48.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:48.973 Initialization complete. Launching workers. 
00:20:48.973 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32007, failed: 0 00:20:48.973 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2294, failed to submit 29713 00:20:48.973 success 479, unsuccess 1815, failed 0 00:20:48.973 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:48.973 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.973 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:48.973 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.973 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:48.973 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.973 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84355 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84355 ']' 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84355 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84355 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:49.539 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:49.540 killing process with pid 84355 00:20:49.540 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84355' 00:20:49.540 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84355 00:20:49.540 11:44:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84355 00:20:49.798 00:20:49.798 real 0m10.749s 00:20:49.798 user 0m43.529s 00:20:49.798 sys 0m2.276s 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:49.798 ************************************ 00:20:49.798 END TEST spdk_target_abort 00:20:49.798 ************************************ 00:20:49.798 11:44:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:49.798 11:44:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:49.798 11:44:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:49.798 11:44:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:49.798 11:44:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:49.798 
************************************ 00:20:49.798 START TEST kernel_target_abort 00:20:49.798 ************************************ 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:49.798 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:50.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:50.058 Waiting for block devices as requested 00:20:50.316 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:50.316 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:50.316 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:50.574 No valid GPT data, bailing 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:50.575 No valid GPT data, bailing 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
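[Editor's sketch] configure_kernel_target, traced below, turns the chosen namespace block device (here /dev/nvme1n1) into a kernel nvmet NVMe-oF TCP target purely through configfs. The xtrace shows only the echo'd values, not the files they are redirected into, so the attribute names in this sketch are the standard nvmet configfs ones and should be read as assumptions; the directory paths, values and final symlink match the trace:

# Sketch of configure_kernel_target; redirect targets (right of '>') are assumed --
# the xtrace below only shows the echo'd values.
modprobe nvmet                 # nvmet_tcp is also needed for the tcp port (assumed present)
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$sub/namespaces/1
port=/sys/kernel/config/nvmet/ports/1

mkdir "$sub" "$ns" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # model string (assumed target file)
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme1n1 > "$ns/device_path"
echo 1            > "$ns/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"

# Initiator-side sanity check, as run at the end of the setup
# (the --hostnqn/--hostid flags from the trace are omitted here):
nvme discover -a 10.0.0.1 -t tcp -s 4420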
00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:50.575 No valid GPT data, bailing 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:50.575 11:44:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:50.575 No valid GPT data, bailing 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 --hostid=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 -a 10.0.0.1 -t tcp -s 4420 00:20:50.834 00:20:50.834 Discovery Log Number of Records 2, Generation counter 2 00:20:50.834 =====Discovery Log Entry 0====== 00:20:50.834 trtype: tcp 00:20:50.834 adrfam: ipv4 00:20:50.834 subtype: current discovery subsystem 00:20:50.834 treq: not specified, sq flow control disable supported 00:20:50.834 portid: 1 00:20:50.834 trsvcid: 4420 00:20:50.834 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:50.834 traddr: 10.0.0.1 00:20:50.834 eflags: none 00:20:50.834 sectype: none 00:20:50.834 =====Discovery Log Entry 1====== 00:20:50.834 trtype: tcp 00:20:50.834 adrfam: ipv4 00:20:50.834 subtype: nvme subsystem 00:20:50.834 treq: not specified, sq flow control disable supported 00:20:50.834 portid: 1 00:20:50.834 trsvcid: 4420 00:20:50.834 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:50.834 traddr: 10.0.0.1 00:20:50.834 eflags: none 00:20:50.834 sectype: none 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:50.834 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:50.835 11:44:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:50.835 11:44:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:54.122 Initializing NVMe Controllers 00:20:54.122 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:54.122 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:54.122 Initialization complete. Launching workers. 00:20:54.122 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31447, failed: 0 00:20:54.122 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31447, failed to submit 0 00:20:54.122 success 0, unsuccess 31447, failed 0 00:20:54.122 11:44:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:54.122 11:44:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:57.406 Initializing NVMe Controllers 00:20:57.406 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:57.406 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:57.406 Initialization complete. Launching workers. 
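The configure_kernel_target sequence captured above is the stock Linux nvmet configfs flow: load the target modules, create a subsystem with one namespace backed by the chosen block device, create a TCP port, and link the two. A minimal standalone sketch follows, with the NQN, device and address values taken from the log; the attribute file names are the standard nvmet ones, and because xtrace does not print redirect targets, the destination of the model-string echo (attr_model here) is an assumption.

    # Minimal kernel NVMe/TCP target via configfs (run as root).
    modprobe nvmet nvmet_tcp
    nvmet=/sys/kernel/config/nvmet
    subnqn=nqn.2016-06.io.spdk:testnqn

    mkdir "$nvmet/subsystems/$subnqn"
    mkdir "$nvmet/subsystems/$subnqn/namespaces/1"
    mkdir "$nvmet/ports/1"

    echo "SPDK-$subnqn" > "$nvmet/subsystems/$subnqn/attr_model"          # redirect target assumed
    echo 1              > "$nvmet/subsystems/$subnqn/attr_allow_any_host"
    echo /dev/nvme1n1   > "$nvmet/subsystems/$subnqn/namespaces/1/device_path"
    echo 1              > "$nvmet/subsystems/$subnqn/namespaces/1/enable"

    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"

    # Expose the subsystem on the port; `nvme discover -t tcp -a 10.0.0.1 -s 4420`
    # then returns the two discovery-log entries shown above.
    ln -s "$nvmet/subsystems/$subnqn" "$nvmet/ports/1/subsystems/"

Tear-down is the reverse, which is what clean_kernel_target does further down in the log: echo 0 into namespaces/1/enable, remove the symlink, rmdir the port, namespace and subsystem directories, then modprobe -r nvmet_tcp nvmet. With the target up, the test simply sweeps the abort example over qds=(4 24 64) against this subsystem, which is what the three abort runs in this part of the log show.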
00:20:57.406 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67696, failed: 0 00:20:57.406 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28891, failed to submit 38805 00:20:57.406 success 0, unsuccess 28891, failed 0 00:20:57.406 11:45:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:57.406 11:45:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:00.744 Initializing NVMe Controllers 00:21:00.744 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:00.744 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:00.744 Initialization complete. Launching workers. 00:21:00.744 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71404, failed: 0 00:21:00.744 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17844, failed to submit 53560 00:21:00.744 success 0, unsuccess 17844, failed 0 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:00.744 11:45:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:01.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.536 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.536 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.536 00:21:03.536 real 0m13.637s 00:21:03.536 user 0m6.237s 00:21:03.536 sys 0m4.711s 00:21:03.536 11:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.536 11:45:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:03.536 ************************************ 00:21:03.536 END TEST kernel_target_abort 00:21:03.536 ************************************ 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:03.536 
11:45:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.536 rmmod nvme_tcp 00:21:03.536 rmmod nvme_fabrics 00:21:03.536 rmmod nvme_keyring 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84355 ']' 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84355 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84355 ']' 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84355 00:21:03.536 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84355) - No such process 00:21:03.536 Process with pid 84355 is not found 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84355 is not found' 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:03.536 11:45:06 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:03.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:04.053 Waiting for block devices as requested 00:21:04.053 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:04.053 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:04.053 11:45:07 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:04.053 11:45:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:04.053 11:45:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:04.053 11:45:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:04.053 11:45:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.053 11:45:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:04.053 11:45:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.312 11:45:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:04.312 00:21:04.312 real 0m27.715s 00:21:04.312 user 0m50.976s 00:21:04.312 sys 0m8.343s 00:21:04.312 11:45:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:04.312 11:45:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:04.312 ************************************ 00:21:04.312 END TEST nvmf_abort_qd_sizes 00:21:04.312 ************************************ 00:21:04.312 11:45:07 -- common/autotest_common.sh@1142 -- # return 0 00:21:04.312 11:45:07 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:04.312 11:45:07 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:04.312 11:45:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:04.312 11:45:07 -- common/autotest_common.sh@10 -- # set +x 00:21:04.312 ************************************ 00:21:04.312 START TEST keyring_file 00:21:04.312 ************************************ 00:21:04.312 11:45:07 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:04.312 * Looking for test storage... 00:21:04.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:04.312 11:45:07 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:04.312 11:45:07 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.312 11:45:07 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.312 11:45:07 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.312 11:45:07 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.312 11:45:07 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.313 11:45:07 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.313 11:45:07 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.313 11:45:07 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.313 11:45:07 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:04.313 11:45:07 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:04.313 11:45:07 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:04.313 11:45:07 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:04.313 11:45:07 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:04.313 11:45:07 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:04.313 11:45:07 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:04.313 11:45:07 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xNLePpG8SB 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xNLePpG8SB 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xNLePpG8SB 00:21:04.313 11:45:07 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xNLePpG8SB 00:21:04.313 11:45:07 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.st7kf5VX7w 00:21:04.313 11:45:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:04.313 11:45:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:04.575 11:45:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.st7kf5VX7w 00:21:04.575 11:45:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.st7kf5VX7w 00:21:04.575 11:45:07 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.st7kf5VX7w 00:21:04.575 11:45:07 keyring_file -- keyring/file.sh@30 -- # tgtpid=85217 00:21:04.575 11:45:07 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85217 00:21:04.575 11:45:07 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85217 ']' 00:21:04.575 11:45:07 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.575 11:45:07 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.575 11:45:07 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.575 11:45:07 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.575 11:45:07 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.575 11:45:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:04.575 [2024-07-12 11:45:07.880896] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
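The prep_key xtrace above reduces to three steps: make a temp file, write a TLS PSK in the NVMe-oF interchange format ("NVMeTLSkey-1:<hash id>:<base64 payload>:", with hash id 00 matching digest=0 here), and restrict the file to mode 0600, which the chmod 0660 negative test later in the log shows is mandatory. A sketch under those assumptions; $PSK stands in for whatever format_interchange_psk printed, since the encoded payload is produced by the python helper and is not reproduced here.

    # Prepare a key file the way prep_key does (variable names are illustrative).
    key0path=$(mktemp)                    # /tmp/tmp.xNLePpG8SB in this run
    printf '%s\n' "$PSK" > "$key0path"    # $PSK: an NVMeTLSkey-1:00:<base64>: interchange string
    chmod 0600 "$key0path"                # keyring_file rejects group/other-accessible key files

    # Once the bdevperf instance below is listening on /var/tmp/bperf.sock,
    # the file is registered under the logical name "key0":
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"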
00:21:04.575 [2024-07-12 11:45:07.881014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85217 ] 00:21:04.575 [2024-07-12 11:45:08.019146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.833 [2024-07-12 11:45:08.201455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.833 [2024-07-12 11:45:08.277203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:05.768 11:45:08 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:05.768 [2024-07-12 11:45:08.870089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.768 null0 00:21:05.768 [2024-07-12 11:45:08.901993] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.768 [2024-07-12 11:45:08.902385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:05.768 [2024-07-12 11:45:08.909997] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.768 11:45:08 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:05.768 [2024-07-12 11:45:08.922042] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:05.768 request: 00:21:05.768 { 00:21:05.768 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.768 "secure_channel": false, 00:21:05.768 "listen_address": { 00:21:05.768 "trtype": "tcp", 00:21:05.768 "traddr": "127.0.0.1", 00:21:05.768 "trsvcid": "4420" 00:21:05.768 }, 00:21:05.768 "method": "nvmf_subsystem_add_listener", 00:21:05.768 "req_id": 1 00:21:05.768 } 00:21:05.768 Got JSON-RPC error response 00:21:05.768 response: 00:21:05.768 { 00:21:05.768 "code": -32602, 00:21:05.768 "message": "Invalid parameters" 00:21:05.768 } 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
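The request/response pair above is a deliberate negative test: the spdk_tgt launched a moment earlier already listens for nqn.2016-06.io.spdk:cnode0 on 127.0.0.1:4420, so adding the same listener again must be rejected. Reproduced as a plain rpc.py call against the default /var/tmp/spdk.sock, with the NOT/es bookkeeping from autotest_common.sh omitted:

    # A second add of an existing listener must fail
    # ("Listener already exists" -> JSON-RPC -32602 Invalid parameters).
    if scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
            nqn.2016-06.io.spdk:cnode0; then
        echo "listener add unexpectedly succeeded" >&2
        exit 1
    fi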
00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.768 11:45:08 keyring_file -- keyring/file.sh@46 -- # bperfpid=85234 00:21:05.768 11:45:08 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:05.768 11:45:08 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85234 /var/tmp/bperf.sock 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85234 ']' 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:05.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.768 11:45:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:05.768 [2024-07-12 11:45:08.985009] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 00:21:05.768 [2024-07-12 11:45:08.985143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85234 ] 00:21:05.768 [2024-07-12 11:45:09.119687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.027 [2024-07-12 11:45:09.287126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.027 [2024-07-12 11:45:09.360913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:06.960 11:45:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.960 11:45:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:06.960 11:45:10 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xNLePpG8SB 00:21:06.960 11:45:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xNLePpG8SB 00:21:06.960 11:45:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.st7kf5VX7w 00:21:06.960 11:45:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.st7kf5VX7w 00:21:07.526 11:45:10 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:07.526 11:45:10 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:07.526 11:45:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.526 11:45:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.526 11:45:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.785 11:45:11 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.xNLePpG8SB == 
\/\t\m\p\/\t\m\p\.\x\N\L\e\P\p\G\8\S\B ]] 00:21:07.785 11:45:11 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:07.785 11:45:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:07.785 11:45:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.785 11:45:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.785 11:45:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:08.043 11:45:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.st7kf5VX7w == \/\t\m\p\/\t\m\p\.\s\t\7\k\f\5\V\X\7\w ]] 00:21:08.043 11:45:11 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:08.043 11:45:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:08.043 11:45:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.043 11:45:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.043 11:45:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.043 11:45:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:08.301 11:45:11 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:08.301 11:45:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:08.301 11:45:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:08.301 11:45:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.301 11:45:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.301 11:45:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:08.301 11:45:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.559 11:45:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:08.559 11:45:11 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.559 11:45:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:09.126 [2024-07-12 11:45:12.325757] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.126 nvme0n1 00:21:09.126 11:45:12 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:09.126 11:45:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:09.126 11:45:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:09.126 11:45:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:09.126 11:45:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.126 11:45:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:09.385 11:45:12 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:09.385 11:45:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:09.385 11:45:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:09.385 11:45:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:09.385 11:45:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:09.385 11:45:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.385 11:45:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:09.644 11:45:12 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:09.644 11:45:12 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:09.644 Running I/O for 1 seconds... 00:21:10.578 00:21:10.578 Latency(us) 00:21:10.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.578 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:10.578 nvme0n1 : 1.01 11608.35 45.35 0.00 0.00 10987.41 5838.66 23712.12 00:21:10.578 =================================================================================================================== 00:21:10.578 Total : 11608.35 45.35 0.00 0.00 10987.41 5838.66 23712.12 00:21:10.578 0 00:21:10.578 11:45:14 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:10.578 11:45:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:11.143 11:45:14 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:11.143 11:45:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.143 11:45:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:11.143 11:45:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:11.143 11:45:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.143 11:45:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.402 11:45:14 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:11.402 11:45:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:11.402 11:45:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.402 11:45:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:11.402 11:45:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.402 11:45:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.402 11:45:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:11.660 11:45:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:11.660 11:45:14 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.660 11:45:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:11.660 11:45:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.660 11:45:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:11.660 11:45:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:11.660 11:45:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:11.660 11:45:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
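The successful TLS pass above is driven entirely over the bperf RPC socket; stripped of the refcount checks it is three calls (paths relative to the SPDK repo root, socket and names exactly as in the log):

    sock=/var/tmp/bperf.sock

    # Attach an NVMe/TCP controller that authenticates with the registered key "key0".
    scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # Run the 1-second randrw workload that produced the latency table above.
    examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

    # Detach again; afterwards key0's refcount drops back to 1.
    scripts/rpc.py -s "$sock" bdev_nvme_detach_controller nvme0

The attempt just below repeats the attach with --psk key1 and is expected to fail, which is what the "Transport endpoint is not connected" errors that follow show.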
00:21:11.660 11:45:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.660 11:45:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.660 [2024-07-12 11:45:15.086446] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:11.660 [2024-07-12 11:45:15.087404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12434f0 (107): Transport endpoint is not connected 00:21:11.660 [2024-07-12 11:45:15.088393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12434f0 (9): Bad file descriptor 00:21:11.661 [2024-07-12 11:45:15.089390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:11.661 [2024-07-12 11:45:15.089408] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:11.661 [2024-07-12 11:45:15.089418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:11.661 request: 00:21:11.661 { 00:21:11.661 "name": "nvme0", 00:21:11.661 "trtype": "tcp", 00:21:11.661 "traddr": "127.0.0.1", 00:21:11.661 "adrfam": "ipv4", 00:21:11.661 "trsvcid": "4420", 00:21:11.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:11.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:11.661 "prchk_reftag": false, 00:21:11.661 "prchk_guard": false, 00:21:11.661 "hdgst": false, 00:21:11.661 "ddgst": false, 00:21:11.661 "psk": "key1", 00:21:11.661 "method": "bdev_nvme_attach_controller", 00:21:11.661 "req_id": 1 00:21:11.661 } 00:21:11.661 Got JSON-RPC error response 00:21:11.661 response: 00:21:11.661 { 00:21:11.661 "code": -5, 00:21:11.661 "message": "Input/output error" 00:21:11.661 } 00:21:11.918 11:45:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:11.918 11:45:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:11.918 11:45:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:11.918 11:45:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:11.918 11:45:15 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:11.918 11:45:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:11.918 11:45:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.918 11:45:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:11.918 11:45:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.918 11:45:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.176 11:45:15 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:12.176 11:45:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:12.176 11:45:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:12.176 11:45:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:12.176 11:45:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:12.176 11:45:15 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:21:12.176 11:45:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.435 11:45:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:12.435 11:45:15 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:12.435 11:45:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:12.694 11:45:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:12.694 11:45:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:12.952 11:45:16 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:12.952 11:45:16 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:12.952 11:45:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.211 11:45:16 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:13.211 11:45:16 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xNLePpG8SB 00:21:13.211 11:45:16 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xNLePpG8SB 00:21:13.211 11:45:16 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:13.211 11:45:16 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xNLePpG8SB 00:21:13.211 11:45:16 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:13.211 11:45:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.211 11:45:16 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:13.211 11:45:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.211 11:45:16 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xNLePpG8SB 00:21:13.211 11:45:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xNLePpG8SB 00:21:13.469 [2024-07-12 11:45:16.706830] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xNLePpG8SB': 0100660 00:21:13.469 [2024-07-12 11:45:16.706881] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:13.469 request: 00:21:13.469 { 00:21:13.469 "name": "key0", 00:21:13.469 "path": "/tmp/tmp.xNLePpG8SB", 00:21:13.469 "method": "keyring_file_add_key", 00:21:13.469 "req_id": 1 00:21:13.469 } 00:21:13.469 Got JSON-RPC error response 00:21:13.469 response: 00:21:13.469 { 00:21:13.469 "code": -1, 00:21:13.469 "message": "Operation not permitted" 00:21:13.469 } 00:21:13.469 11:45:16 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:13.469 11:45:16 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:13.469 11:45:16 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:13.469 11:45:16 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:13.469 11:45:16 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xNLePpG8SB 00:21:13.469 11:45:16 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xNLePpG8SB 00:21:13.469 11:45:16 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xNLePpG8SB 00:21:13.726 11:45:17 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xNLePpG8SB 00:21:13.726 11:45:17 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:13.726 11:45:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:13.726 11:45:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:13.726 11:45:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.726 11:45:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.726 11:45:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:13.984 11:45:17 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:13.984 11:45:17 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.984 11:45:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:13.984 11:45:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.984 11:45:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:13.984 11:45:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.984 11:45:17 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:13.984 11:45:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:13.984 11:45:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.984 11:45:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:14.241 [2024-07-12 11:45:17.615052] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xNLePpG8SB': No such file or directory 00:21:14.241 [2024-07-12 11:45:17.615115] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:14.241 [2024-07-12 11:45:17.615148] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:14.241 [2024-07-12 11:45:17.615160] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:14.241 [2024-07-12 11:45:17.615172] bdev_nvme.c:6267:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:14.241 request: 00:21:14.241 { 00:21:14.241 "name": "nvme0", 00:21:14.241 "trtype": "tcp", 00:21:14.241 "traddr": "127.0.0.1", 00:21:14.241 "adrfam": "ipv4", 00:21:14.242 "trsvcid": "4420", 00:21:14.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:14.242 "prchk_reftag": false, 00:21:14.242 "prchk_guard": false, 00:21:14.242 "hdgst": false, 00:21:14.242 "ddgst": false, 00:21:14.242 "psk": "key0", 00:21:14.242 "method": "bdev_nvme_attach_controller", 00:21:14.242 "req_id": 1 00:21:14.242 } 00:21:14.242 
Got JSON-RPC error response 00:21:14.242 response: 00:21:14.242 { 00:21:14.242 "code": -19, 00:21:14.242 "message": "No such device" 00:21:14.242 } 00:21:14.242 11:45:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:14.242 11:45:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.242 11:45:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.242 11:45:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.242 11:45:17 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:14.242 11:45:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:14.500 11:45:17 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:14.500 11:45:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:14.500 11:45:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:14.500 11:45:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:14.500 11:45:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:14.500 11:45:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:14.500 11:45:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.a9tRzmwyGl 00:21:14.500 11:45:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:14.500 11:45:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:14.500 11:45:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:14.500 11:45:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:14.500 11:45:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:14.500 11:45:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:14.500 11:45:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:14.500 11:45:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.a9tRzmwyGl 00:21:14.758 11:45:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.a9tRzmwyGl 00:21:14.758 11:45:17 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.a9tRzmwyGl 00:21:14.758 11:45:17 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.a9tRzmwyGl 00:21:14.758 11:45:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.a9tRzmwyGl 00:21:14.758 11:45:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:14.758 11:45:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:15.017 nvme0n1 00:21:15.017 11:45:18 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:15.017 11:45:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:15.017 11:45:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:15.275 11:45:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.275 11:45:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:15.275 11:45:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.275 11:45:18 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:15.275 11:45:18 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:15.275 11:45:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:15.533 11:45:18 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:15.533 11:45:18 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:15.533 11:45:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.533 11:45:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.533 11:45:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.825 11:45:19 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:15.825 11:45:19 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:15.825 11:45:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:15.825 11:45:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:15.825 11:45:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.825 11:45:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.825 11:45:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.084 11:45:19 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:16.084 11:45:19 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:16.084 11:45:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:16.342 11:45:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:16.342 11:45:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.342 11:45:19 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:16.601 11:45:20 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:16.601 11:45:20 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.a9tRzmwyGl 00:21:16.601 11:45:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.a9tRzmwyGl 00:21:16.859 11:45:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.st7kf5VX7w 00:21:16.859 11:45:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.st7kf5VX7w 00:21:17.117 11:45:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:17.117 11:45:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:17.374 nvme0n1 00:21:17.374 11:45:20 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:17.374 11:45:20 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:17.942 11:45:21 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:17.942 "subsystems": [ 00:21:17.942 { 00:21:17.942 "subsystem": "keyring", 00:21:17.942 "config": [ 00:21:17.942 { 00:21:17.942 "method": "keyring_file_add_key", 00:21:17.942 "params": { 00:21:17.942 "name": "key0", 00:21:17.942 "path": "/tmp/tmp.a9tRzmwyGl" 00:21:17.942 } 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "method": "keyring_file_add_key", 00:21:17.942 "params": { 00:21:17.942 "name": "key1", 00:21:17.942 "path": "/tmp/tmp.st7kf5VX7w" 00:21:17.942 } 00:21:17.942 } 00:21:17.942 ] 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "subsystem": "iobuf", 00:21:17.942 "config": [ 00:21:17.942 { 00:21:17.942 "method": "iobuf_set_options", 00:21:17.942 "params": { 00:21:17.942 "small_pool_count": 8192, 00:21:17.942 "large_pool_count": 1024, 00:21:17.942 "small_bufsize": 8192, 00:21:17.942 "large_bufsize": 135168 00:21:17.942 } 00:21:17.942 } 00:21:17.942 ] 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "subsystem": "sock", 00:21:17.942 "config": [ 00:21:17.942 { 00:21:17.942 "method": "sock_set_default_impl", 00:21:17.942 "params": { 00:21:17.942 "impl_name": "uring" 00:21:17.942 } 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "method": "sock_impl_set_options", 00:21:17.942 "params": { 00:21:17.942 "impl_name": "ssl", 00:21:17.942 "recv_buf_size": 4096, 00:21:17.942 "send_buf_size": 4096, 00:21:17.942 "enable_recv_pipe": true, 00:21:17.942 "enable_quickack": false, 00:21:17.942 "enable_placement_id": 0, 00:21:17.942 "enable_zerocopy_send_server": true, 00:21:17.942 "enable_zerocopy_send_client": false, 00:21:17.942 "zerocopy_threshold": 0, 00:21:17.942 "tls_version": 0, 00:21:17.942 "enable_ktls": false 00:21:17.942 } 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "method": "sock_impl_set_options", 00:21:17.942 "params": { 00:21:17.942 "impl_name": "posix", 00:21:17.942 "recv_buf_size": 2097152, 00:21:17.942 "send_buf_size": 2097152, 00:21:17.942 "enable_recv_pipe": true, 00:21:17.942 "enable_quickack": false, 00:21:17.942 "enable_placement_id": 0, 00:21:17.942 "enable_zerocopy_send_server": true, 00:21:17.942 "enable_zerocopy_send_client": false, 00:21:17.942 "zerocopy_threshold": 0, 00:21:17.942 "tls_version": 0, 00:21:17.942 "enable_ktls": false 00:21:17.942 } 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "method": "sock_impl_set_options", 00:21:17.942 "params": { 00:21:17.942 "impl_name": "uring", 00:21:17.942 "recv_buf_size": 2097152, 00:21:17.942 "send_buf_size": 2097152, 00:21:17.942 "enable_recv_pipe": true, 00:21:17.942 "enable_quickack": false, 00:21:17.942 "enable_placement_id": 0, 00:21:17.942 "enable_zerocopy_send_server": false, 00:21:17.942 "enable_zerocopy_send_client": false, 00:21:17.942 "zerocopy_threshold": 0, 00:21:17.942 "tls_version": 0, 00:21:17.942 "enable_ktls": false 00:21:17.942 } 00:21:17.942 } 00:21:17.942 ] 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "subsystem": "vmd", 00:21:17.942 "config": [] 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "subsystem": "accel", 00:21:17.942 "config": [ 00:21:17.942 { 00:21:17.942 "method": "accel_set_options", 00:21:17.942 "params": { 00:21:17.942 "small_cache_size": 128, 00:21:17.942 "large_cache_size": 16, 00:21:17.942 "task_count": 2048, 00:21:17.942 "sequence_count": 2048, 00:21:17.942 "buf_count": 2048 00:21:17.942 } 00:21:17.942 } 00:21:17.942 ] 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "subsystem": "bdev", 00:21:17.942 "config": [ 00:21:17.942 { 
00:21:17.942 "method": "bdev_set_options", 00:21:17.942 "params": { 00:21:17.942 "bdev_io_pool_size": 65535, 00:21:17.942 "bdev_io_cache_size": 256, 00:21:17.942 "bdev_auto_examine": true, 00:21:17.942 "iobuf_small_cache_size": 128, 00:21:17.942 "iobuf_large_cache_size": 16 00:21:17.942 } 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "method": "bdev_raid_set_options", 00:21:17.942 "params": { 00:21:17.942 "process_window_size_kb": 1024 00:21:17.942 } 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "method": "bdev_iscsi_set_options", 00:21:17.942 "params": { 00:21:17.942 "timeout_sec": 30 00:21:17.942 } 00:21:17.942 }, 00:21:17.942 { 00:21:17.942 "method": "bdev_nvme_set_options", 00:21:17.942 "params": { 00:21:17.942 "action_on_timeout": "none", 00:21:17.942 "timeout_us": 0, 00:21:17.942 "timeout_admin_us": 0, 00:21:17.942 "keep_alive_timeout_ms": 10000, 00:21:17.942 "arbitration_burst": 0, 00:21:17.942 "low_priority_weight": 0, 00:21:17.942 "medium_priority_weight": 0, 00:21:17.942 "high_priority_weight": 0, 00:21:17.942 "nvme_adminq_poll_period_us": 10000, 00:21:17.942 "nvme_ioq_poll_period_us": 0, 00:21:17.942 "io_queue_requests": 512, 00:21:17.942 "delay_cmd_submit": true, 00:21:17.942 "transport_retry_count": 4, 00:21:17.942 "bdev_retry_count": 3, 00:21:17.942 "transport_ack_timeout": 0, 00:21:17.942 "ctrlr_loss_timeout_sec": 0, 00:21:17.942 "reconnect_delay_sec": 0, 00:21:17.942 "fast_io_fail_timeout_sec": 0, 00:21:17.942 "disable_auto_failback": false, 00:21:17.942 "generate_uuids": false, 00:21:17.942 "transport_tos": 0, 00:21:17.942 "nvme_error_stat": false, 00:21:17.942 "rdma_srq_size": 0, 00:21:17.942 "io_path_stat": false, 00:21:17.943 "allow_accel_sequence": false, 00:21:17.943 "rdma_max_cq_size": 0, 00:21:17.943 "rdma_cm_event_timeout_ms": 0, 00:21:17.943 "dhchap_digests": [ 00:21:17.943 "sha256", 00:21:17.943 "sha384", 00:21:17.943 "sha512" 00:21:17.943 ], 00:21:17.943 "dhchap_dhgroups": [ 00:21:17.943 "null", 00:21:17.943 "ffdhe2048", 00:21:17.943 "ffdhe3072", 00:21:17.943 "ffdhe4096", 00:21:17.943 "ffdhe6144", 00:21:17.943 "ffdhe8192" 00:21:17.943 ] 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "bdev_nvme_attach_controller", 00:21:17.943 "params": { 00:21:17.943 "name": "nvme0", 00:21:17.943 "trtype": "TCP", 00:21:17.943 "adrfam": "IPv4", 00:21:17.943 "traddr": "127.0.0.1", 00:21:17.943 "trsvcid": "4420", 00:21:17.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.943 "prchk_reftag": false, 00:21:17.943 "prchk_guard": false, 00:21:17.943 "ctrlr_loss_timeout_sec": 0, 00:21:17.943 "reconnect_delay_sec": 0, 00:21:17.943 "fast_io_fail_timeout_sec": 0, 00:21:17.943 "psk": "key0", 00:21:17.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.943 "hdgst": false, 00:21:17.943 "ddgst": false 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "bdev_nvme_set_hotplug", 00:21:17.943 "params": { 00:21:17.943 "period_us": 100000, 00:21:17.943 "enable": false 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "bdev_wait_for_examine" 00:21:17.943 } 00:21:17.943 ] 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "subsystem": "nbd", 00:21:17.943 "config": [] 00:21:17.943 } 00:21:17.943 ] 00:21:17.943 }' 00:21:17.943 11:45:21 keyring_file -- keyring/file.sh@114 -- # killprocess 85234 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85234 ']' 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85234 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85234 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:17.943 killing process with pid 85234 00:21:17.943 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.943 00:21:17.943 Latency(us) 00:21:17.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.943 =================================================================================================================== 00:21:17.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85234' 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@967 -- # kill 85234 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@972 -- # wait 85234 00:21:17.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:17.943 11:45:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=85490 00:21:17.943 11:45:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85490 /var/tmp/bperf.sock 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85490 ']' 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
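What follows is the relaunch of bdevperf with the configuration captured by save_config; the -c /dev/fd/63 argument indicates that the JSON is handed over through bash process substitution rather than a file on disk. A sketch of that pattern, where the <(echo ...) form is an inference from /dev/fd/63 rather than a quote of the script:

  # Capture the live configuration from the old instance before it is killed,
  # then start a fresh bdevperf that reads the same JSON via /dev/fd/63.
  config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")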
00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.943 11:45:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:17.943 11:45:21 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:17.943 11:45:21 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:17.943 "subsystems": [ 00:21:17.943 { 00:21:17.943 "subsystem": "keyring", 00:21:17.943 "config": [ 00:21:17.943 { 00:21:17.943 "method": "keyring_file_add_key", 00:21:17.943 "params": { 00:21:17.943 "name": "key0", 00:21:17.943 "path": "/tmp/tmp.a9tRzmwyGl" 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "keyring_file_add_key", 00:21:17.943 "params": { 00:21:17.943 "name": "key1", 00:21:17.943 "path": "/tmp/tmp.st7kf5VX7w" 00:21:17.943 } 00:21:17.943 } 00:21:17.943 ] 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "subsystem": "iobuf", 00:21:17.943 "config": [ 00:21:17.943 { 00:21:17.943 "method": "iobuf_set_options", 00:21:17.943 "params": { 00:21:17.943 "small_pool_count": 8192, 00:21:17.943 "large_pool_count": 1024, 00:21:17.943 "small_bufsize": 8192, 00:21:17.943 "large_bufsize": 135168 00:21:17.943 } 00:21:17.943 } 00:21:17.943 ] 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "subsystem": "sock", 00:21:17.943 "config": [ 00:21:17.943 { 00:21:17.943 "method": "sock_set_default_impl", 00:21:17.943 "params": { 00:21:17.943 "impl_name": "uring" 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "sock_impl_set_options", 00:21:17.943 "params": { 00:21:17.943 "impl_name": "ssl", 00:21:17.943 "recv_buf_size": 4096, 00:21:17.943 "send_buf_size": 4096, 00:21:17.943 "enable_recv_pipe": true, 00:21:17.943 "enable_quickack": false, 00:21:17.943 "enable_placement_id": 0, 00:21:17.943 "enable_zerocopy_send_server": true, 00:21:17.943 "enable_zerocopy_send_client": false, 00:21:17.943 "zerocopy_threshold": 0, 00:21:17.943 "tls_version": 0, 00:21:17.943 "enable_ktls": false 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "sock_impl_set_options", 00:21:17.943 "params": { 00:21:17.943 "impl_name": "posix", 00:21:17.943 "recv_buf_size": 2097152, 00:21:17.943 "send_buf_size": 2097152, 00:21:17.943 "enable_recv_pipe": true, 00:21:17.943 "enable_quickack": false, 00:21:17.943 "enable_placement_id": 0, 00:21:17.943 "enable_zerocopy_send_server": true, 00:21:17.943 "enable_zerocopy_send_client": false, 00:21:17.943 "zerocopy_threshold": 0, 00:21:17.943 "tls_version": 0, 00:21:17.943 "enable_ktls": false 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "sock_impl_set_options", 00:21:17.943 "params": { 00:21:17.943 "impl_name": "uring", 00:21:17.943 "recv_buf_size": 2097152, 00:21:17.943 "send_buf_size": 2097152, 00:21:17.943 "enable_recv_pipe": true, 00:21:17.943 "enable_quickack": false, 00:21:17.943 "enable_placement_id": 0, 00:21:17.943 "enable_zerocopy_send_server": false, 00:21:17.943 "enable_zerocopy_send_client": false, 00:21:17.943 "zerocopy_threshold": 0, 00:21:17.943 "tls_version": 0, 00:21:17.943 "enable_ktls": false 00:21:17.943 } 00:21:17.943 } 00:21:17.943 ] 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "subsystem": "vmd", 00:21:17.943 "config": [] 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "subsystem": "accel", 00:21:17.943 "config": [ 00:21:17.943 { 00:21:17.943 "method": "accel_set_options", 00:21:17.943 "params": { 00:21:17.943 "small_cache_size": 128, 00:21:17.943 "large_cache_size": 16, 
00:21:17.943 "task_count": 2048, 00:21:17.943 "sequence_count": 2048, 00:21:17.943 "buf_count": 2048 00:21:17.943 } 00:21:17.943 } 00:21:17.943 ] 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "subsystem": "bdev", 00:21:17.943 "config": [ 00:21:17.943 { 00:21:17.943 "method": "bdev_set_options", 00:21:17.943 "params": { 00:21:17.943 "bdev_io_pool_size": 65535, 00:21:17.943 "bdev_io_cache_size": 256, 00:21:17.943 "bdev_auto_examine": true, 00:21:17.943 "iobuf_small_cache_size": 128, 00:21:17.943 "iobuf_large_cache_size": 16 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "bdev_raid_set_options", 00:21:17.943 "params": { 00:21:17.943 "process_window_size_kb": 1024 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "bdev_iscsi_set_options", 00:21:17.943 "params": { 00:21:17.943 "timeout_sec": 30 00:21:17.943 } 00:21:17.943 }, 00:21:17.943 { 00:21:17.943 "method": "bdev_nvme_set_options", 00:21:17.943 "params": { 00:21:17.943 "action_on_timeout": "none", 00:21:17.943 "timeout_us": 0, 00:21:17.943 "timeout_admin_us": 0, 00:21:17.943 "keep_alive_timeout_ms": 10000, 00:21:17.943 "arbitration_burst": 0, 00:21:17.943 "low_priority_weight": 0, 00:21:17.943 "medium_priority_weight": 0, 00:21:17.943 "high_priority_weight": 0, 00:21:17.943 "nvme_adminq_poll_period_us": 10000, 00:21:17.943 "nvme_ioq_poll_period_us": 0, 00:21:17.944 "io_queue_requests": 512, 00:21:17.944 "delay_cmd_submit": true, 00:21:17.944 "transport_retry_count": 4, 00:21:17.944 "bdev_retry_count": 3, 00:21:17.944 "transport_ack_timeout": 0, 00:21:17.944 "ctrlr_loss_timeout_sec": 0, 00:21:17.944 "reconnect_delay_sec": 0, 00:21:17.944 "fast_io_fail_timeout_sec": 0, 00:21:17.944 "disable_auto_failback": false, 00:21:17.944 "generate_uuids": false, 00:21:17.944 "transport_tos": 0, 00:21:17.944 "nvme_error_stat": false, 00:21:17.944 "rdma_srq_size": 0, 00:21:17.944 "io_path_stat": false, 00:21:17.944 "allow_accel_sequence": false, 00:21:17.944 "rdma_max_cq_size": 0, 00:21:17.944 "rdma_cm_event_timeout_ms": 0, 00:21:17.944 "dhchap_digests": [ 00:21:17.944 "sha256", 00:21:17.944 "sha384", 00:21:17.944 "sha512" 00:21:17.944 ], 00:21:17.944 "dhchap_dhgroups": [ 00:21:17.944 "null", 00:21:17.944 "ffdhe2048", 00:21:17.944 "ffdhe3072", 00:21:17.944 "ffdhe4096", 00:21:17.944 "ffdhe6144", 00:21:17.944 "ffdhe8192" 00:21:17.944 ] 00:21:17.944 } 00:21:17.944 }, 00:21:17.944 { 00:21:17.944 "method": "bdev_nvme_attach_controller", 00:21:17.944 "params": { 00:21:17.944 "name": "nvme0", 00:21:17.944 "trtype": "TCP", 00:21:17.944 "adrfam": "IPv4", 00:21:17.944 "traddr": "127.0.0.1", 00:21:17.944 "trsvcid": "4420", 00:21:17.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.944 "prchk_reftag": false, 00:21:17.944 "prchk_guard": false, 00:21:17.944 "ctrlr_loss_timeout_sec": 0, 00:21:17.944 "reconnect_delay_sec": 0, 00:21:17.944 "fast_io_fail_timeout_sec": 0, 00:21:17.944 "psk": "key0", 00:21:17.944 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.944 "hdgst": false, 00:21:17.944 "ddgst": false 00:21:17.944 } 00:21:17.944 }, 00:21:17.944 { 00:21:17.944 "method": "bdev_nvme_set_hotplug", 00:21:17.944 "params": { 00:21:17.944 "period_us": 100000, 00:21:17.944 "enable": false 00:21:17.944 } 00:21:17.944 }, 00:21:17.944 { 00:21:17.944 "method": "bdev_wait_for_examine" 00:21:17.944 } 00:21:17.944 ] 00:21:17.944 }, 00:21:17.944 { 00:21:17.944 "subsystem": "nbd", 00:21:17.944 "config": [] 00:21:17.944 } 00:21:17.944 ] 00:21:17.944 }' 00:21:18.202 [2024-07-12 11:45:21.404833] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 
24.03.0 initialization... 00:21:18.202 [2024-07-12 11:45:21.404938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85490 ] 00:21:18.202 [2024-07-12 11:45:21.543914] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.459 [2024-07-12 11:45:21.658942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.459 [2024-07-12 11:45:21.795123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:18.459 [2024-07-12 11:45:21.851730] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.024 11:45:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.024 11:45:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:19.024 11:45:22 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:19.024 11:45:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:19.024 11:45:22 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:19.282 11:45:22 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:19.282 11:45:22 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:19.282 11:45:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:19.282 11:45:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:19.282 11:45:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:19.282 11:45:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:19.282 11:45:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:19.541 11:45:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:19.541 11:45:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:19.541 11:45:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:19.541 11:45:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:19.541 11:45:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:19.541 11:45:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:19.541 11:45:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:19.801 11:45:23 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:19.801 11:45:23 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:19.801 11:45:23 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:19.801 11:45:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:20.060 11:45:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:20.060 11:45:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:20.060 11:45:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.a9tRzmwyGl /tmp/tmp.st7kf5VX7w 00:21:20.060 11:45:23 keyring_file -- keyring/file.sh@20 -- # killprocess 85490 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85490 ']' 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85490 00:21:20.060 11:45:23 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85490 00:21:20.060 killing process with pid 85490 00:21:20.060 Received shutdown signal, test time was about 1.000000 seconds 00:21:20.060 00:21:20.060 Latency(us) 00:21:20.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.060 =================================================================================================================== 00:21:20.060 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85490' 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@967 -- # kill 85490 00:21:20.060 11:45:23 keyring_file -- common/autotest_common.sh@972 -- # wait 85490 00:21:20.317 11:45:23 keyring_file -- keyring/file.sh@21 -- # killprocess 85217 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85217 ']' 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85217 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85217 00:21:20.317 killing process with pid 85217 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85217' 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@967 -- # kill 85217 00:21:20.317 [2024-07-12 11:45:23.738201] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:20.317 11:45:23 keyring_file -- common/autotest_common.sh@972 -- # wait 85217 00:21:20.885 ************************************ 00:21:20.885 END TEST keyring_file 00:21:20.885 ************************************ 00:21:20.885 00:21:20.885 real 0m16.563s 00:21:20.885 user 0m41.335s 00:21:20.885 sys 0m3.277s 00:21:20.885 11:45:24 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:20.885 11:45:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:20.885 11:45:24 -- common/autotest_common.sh@1142 -- # return 0 00:21:20.885 11:45:24 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:21:20.885 11:45:24 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:20.885 11:45:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:20.885 11:45:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.885 11:45:24 -- common/autotest_common.sh@10 -- # set +x 00:21:20.885 ************************************ 00:21:20.885 START TEST keyring_linux 00:21:20.885 ************************************ 00:21:20.885 11:45:24 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:20.885 * 
Looking for test storage... 00:21:20.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:20.885 11:45:24 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:20.885 11:45:24 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=7ab67270-3ac4-4f3c-984e-f75d1bf196c0 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:20.885 11:45:24 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.885 11:45:24 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.885 11:45:24 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.885 11:45:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.885 11:45:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.885 11:45:24 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.885 11:45:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:20.885 11:45:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.885 11:45:24 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:20.885 11:45:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:20.885 11:45:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:20.885 11:45:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:20.885 11:45:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:20.885 11:45:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:20.885 11:45:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:20.885 11:45:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:20.885 11:45:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:20.885 11:45:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:20.885 11:45:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:20.885 11:45:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:20.885 11:45:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:20.885 11:45:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:20.886 11:45:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:20.886 11:45:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:20.886 11:45:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:20.886 11:45:24 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:20.886 11:45:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:20.886 11:45:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:21.145 11:45:24 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:21.145 /tmp/:spdk-test:key0 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:21.145 11:45:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:21.145 11:45:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:21.145 11:45:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:21.145 11:45:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:21.145 11:45:24 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:21.145 11:45:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:21.145 11:45:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:21.145 11:45:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:21.145 /tmp/:spdk-test:key1 00:21:21.145 11:45:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85603 00:21:21.145 11:45:24 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:21.145 11:45:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85603 00:21:21.145 11:45:24 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85603 ']' 00:21:21.145 11:45:24 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.145 11:45:24 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.145 11:45:24 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.145 11:45:24 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.145 11:45:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:21.145 [2024-07-12 11:45:24.465259] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
00:21:21.145 [2024-07-12 11:45:24.465654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85603 ] 00:21:21.403 [2024-07-12 11:45:24.608140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.403 [2024-07-12 11:45:24.757003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.403 [2024-07-12 11:45:24.812298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:22.336 11:45:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:22.336 [2024-07-12 11:45:25.503617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.336 null0 00:21:22.336 [2024-07-12 11:45:25.535496] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.336 [2024-07-12 11:45:25.535744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.336 11:45:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:22.336 388327125 00:21:22.336 11:45:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:22.336 841004544 00:21:22.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.336 11:45:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85621 00:21:22.336 11:45:25 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:22.336 11:45:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85621 /var/tmp/bperf.sock 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85621 ']' 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.336 11:45:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:22.336 [2024-07-12 11:45:25.619216] Starting SPDK v24.09-pre git sha1 aebb775b1 / DPDK 24.03.0 initialization... 
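keyring_linux exercises the kernel session keyring instead of key files: the interchange-format PSKs are inserted with keyctl, and controllers reference them by their ":spdk-test:" names. A condensed sketch of the flow traced below, reusing the literal key strings and serial number from this run:

  # Put both interchange-format PSKs into the session keyring (@s)
  keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
  keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s

  # bdevperf was started with --wait-for-rpc, so enable the Linux keyring
  # backend before initializing the framework
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_linux_set_options --enable
  $rpc framework_start_init

  # Attach using the kernel key by name, then cross-check the serial number
  # SPDK reports against what keyctl resolves for the same key
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  $rpc keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'
  keyctl search @s user :spdk-test:key0    # both printed 388327125 in this run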
00:21:22.336 [2024-07-12 11:45:25.619334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85621 ] 00:21:22.336 [2024-07-12 11:45:25.759268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.594 [2024-07-12 11:45:25.898096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.160 11:45:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.160 11:45:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:23.160 11:45:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:23.160 11:45:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:23.726 11:45:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:23.726 11:45:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:23.984 [2024-07-12 11:45:27.197744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:23.984 11:45:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:23.984 11:45:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:24.242 [2024-07-12 11:45:27.517024] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.242 nvme0n1 00:21:24.242 11:45:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:24.242 11:45:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:24.242 11:45:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:24.242 11:45:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:24.242 11:45:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:24.242 11:45:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.500 11:45:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:24.500 11:45:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:24.500 11:45:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:24.500 11:45:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.500 11:45:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.500 11:45:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:24.500 11:45:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:24.759 11:45:28 keyring_linux -- keyring/linux.sh@25 -- # sn=388327125 00:21:24.759 11:45:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:24.759 11:45:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:24.759 
11:45:28 keyring_linux -- keyring/linux.sh@26 -- # [[ 388327125 == \3\8\8\3\2\7\1\2\5 ]] 00:21:24.759 11:45:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 388327125 00:21:24.759 11:45:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:24.759 11:45:28 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:25.018 Running I/O for 1 seconds... 00:21:25.953 00:21:25.953 Latency(us) 00:21:25.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.953 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:25.953 nvme0n1 : 1.01 9861.18 38.52 0.00 0.00 12908.68 3619.37 16562.73 00:21:25.953 =================================================================================================================== 00:21:25.953 Total : 9861.18 38.52 0.00 0.00 12908.68 3619.37 16562.73 00:21:25.953 0 00:21:25.953 11:45:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:25.953 11:45:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:26.211 11:45:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:26.211 11:45:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:26.211 11:45:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:26.211 11:45:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:26.211 11:45:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:26.211 11:45:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:26.468 11:45:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:26.468 11:45:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:26.468 11:45:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:26.468 11:45:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:26.468 11:45:29 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:21:26.468 11:45:29 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:26.468 11:45:29 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:26.468 11:45:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.468 11:45:29 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:26.468 11:45:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.468 11:45:29 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:26.468 11:45:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:26.726 [2024-07-12 11:45:30.111759] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:26.726 [2024-07-12 11:45:30.112098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f3460 (107): Transport endpoint is not connected 00:21:26.726 [2024-07-12 11:45:30.113078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f3460 (9): Bad file descriptor 00:21:26.726 [2024-07-12 11:45:30.114074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:26.726 [2024-07-12 11:45:30.114112] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:26.726 [2024-07-12 11:45:30.114125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:26.726 request: 00:21:26.726 { 00:21:26.726 "name": "nvme0", 00:21:26.726 "trtype": "tcp", 00:21:26.726 "traddr": "127.0.0.1", 00:21:26.726 "adrfam": "ipv4", 00:21:26.726 "trsvcid": "4420", 00:21:26.726 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.726 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:26.726 "prchk_reftag": false, 00:21:26.726 "prchk_guard": false, 00:21:26.726 "hdgst": false, 00:21:26.726 "ddgst": false, 00:21:26.726 "psk": ":spdk-test:key1", 00:21:26.726 "method": "bdev_nvme_attach_controller", 00:21:26.726 "req_id": 1 00:21:26.726 } 00:21:26.726 Got JSON-RPC error response 00:21:26.726 response: 00:21:26.726 { 00:21:26.726 "code": -5, 00:21:26.726 "message": "Input/output error" 00:21:26.726 } 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@33 -- # sn=388327125 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 388327125 00:21:26.726 1 links removed 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@33 -- # sn=841004544 00:21:26.726 11:45:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 841004544 00:21:26.726 1 links removed 00:21:26.726 11:45:30 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 85621 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85621 ']' 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85621 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.726 11:45:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85621 00:21:26.984 killing process with pid 85621 00:21:26.984 Received shutdown signal, test time was about 1.000000 seconds 00:21:26.984 00:21:26.984 Latency(us) 00:21:26.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.984 =================================================================================================================== 00:21:26.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.984 11:45:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:26.984 11:45:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:26.984 11:45:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85621' 00:21:26.984 11:45:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 85621 00:21:26.984 11:45:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 85621 00:21:27.275 11:45:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85603 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85603 ']' 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85603 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85603 00:21:27.275 killing process with pid 85603 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85603' 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 85603 00:21:27.275 11:45:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 85603 00:21:27.540 00:21:27.540 real 0m6.747s 00:21:27.540 user 0m13.198s 00:21:27.540 sys 0m1.617s 00:21:27.540 11:45:30 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:27.540 ************************************ 00:21:27.540 END TEST keyring_linux 00:21:27.540 ************************************ 00:21:27.540 11:45:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:27.540 11:45:30 -- common/autotest_common.sh@1142 -- # return 0 00:21:27.540 11:45:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
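Both keyring tests finish with the same kind of cleanup: keyring_file removes its temporary key files, and keyring_linux (traced just above) unlinks the kernel keys by serial number. Condensed, using the key names and paths from this run:

  # Drop the session-keyring entries created by keyring_linux
  for name in :spdk-test:key0 :spdk-test:key1; do
      sn=$(keyctl search @s user "$name")
      keyctl unlink "$sn"
  done

  # Remove the temporary PSK files used by keyring_file
  rm -f /tmp/tmp.a9tRzmwyGl /tmp/tmp.st7kf5VX7w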
00:21:27.540 11:45:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:21:27.540 11:45:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:21:27.540 11:45:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:27.540 11:45:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:27.540 11:45:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:21:27.540 11:45:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:21:27.540 11:45:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:21:27.540 11:45:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.540 11:45:30 -- common/autotest_common.sh@10 -- # set +x 00:21:27.799 11:45:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:21:27.799 11:45:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:27.799 11:45:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:27.799 11:45:30 -- common/autotest_common.sh@10 -- # set +x 00:21:29.699 INFO: APP EXITING 00:21:29.699 INFO: killing all VMs 00:21:29.699 INFO: killing vhost app 00:21:29.699 INFO: EXIT DONE 00:21:29.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:29.956 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:29.956 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:30.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.520 Cleaning 00:21:30.520 Removing: /var/run/dpdk/spdk0/config 00:21:30.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:30.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:30.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:30.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:30.520 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:30.520 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:30.520 Removing: /var/run/dpdk/spdk1/config 00:21:30.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:30.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:30.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:30.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:30.520 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:30.520 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:30.778 Removing: /var/run/dpdk/spdk2/config 00:21:30.778 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:30.778 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:30.778 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:30.778 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:30.778 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:30.778 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:30.778 Removing: /var/run/dpdk/spdk3/config 00:21:30.778 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:30.778 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:30.778 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:30.778 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:30.778 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:30.778 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:30.778 Removing: /var/run/dpdk/spdk4/config 00:21:30.778 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:30.778 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:30.778 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:30.778 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:21:30.778 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:21:30.778 Removing: /var/run/dpdk/spdk4/hugepage_info
00:21:30.778 Removing: /dev/shm/nvmf_trace.0
00:21:30.778 Removing: /dev/shm/spdk_tgt_trace.pid58714
00:21:30.778 Removing: /var/run/dpdk/spdk0
00:21:30.778 Removing: /var/run/dpdk/spdk1
00:21:30.778 Removing: /var/run/dpdk/spdk2
00:21:30.778 Removing: /var/run/dpdk/spdk3
00:21:30.778 Removing: /var/run/dpdk/spdk4
00:21:30.778 Removing: /var/run/dpdk/spdk_pid58569
00:21:30.778 Removing: /var/run/dpdk/spdk_pid58714
00:21:30.778 Removing: /var/run/dpdk/spdk_pid58901
00:21:30.778 Removing: /var/run/dpdk/spdk_pid58993
00:21:30.778 Removing: /var/run/dpdk/spdk_pid59015
00:21:30.778 Removing: /var/run/dpdk/spdk_pid59130
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59148
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59266
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59457
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59597
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59667
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59742
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59829
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59900
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59939
00:21:30.779 Removing: /var/run/dpdk/spdk_pid59974
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60036
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60124
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60557
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60609
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60660
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60676
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60743
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60765
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60832
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60848
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60893
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60911
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60957
00:21:30.779 Removing: /var/run/dpdk/spdk_pid60975
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61097
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61133
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61202
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61259
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61283
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61342
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61382
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61411
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61451
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61480
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61520
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61549
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61589
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61618
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61658
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61687
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61727
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61756
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61796
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61831
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61867
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61902
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61939
00:21:30.779 Removing: /var/run/dpdk/spdk_pid61977
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62011
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62049
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62119
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62206
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62509
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62532
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62563
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62582
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62593
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62623
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62631
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62652
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62671
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62690
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62710
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62730
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62744
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62759
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62784
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62798
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62819
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62838
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62852
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62867
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62903
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62917
00:21:30.779 Removing: /var/run/dpdk/spdk_pid62952
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63010
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63044
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63054
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63082
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63092
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63105
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63147
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63161
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63195
00:21:30.779 Removing: /var/run/dpdk/spdk_pid63199
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63214
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63223
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63233
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63248
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63252
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63267
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63294
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63322
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63337
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63360
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63375
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63391
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63427
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63444
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63471
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63478
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63491
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63493
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63506
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63514
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63521
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63534
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63603
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63650
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63760
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63794
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63839
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63859
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63881
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63890
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63927
00:21:31.037 Removing: /var/run/dpdk/spdk_pid63948
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64018
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64039
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64089
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64155
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64211
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64240
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64326
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64374
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64412
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64625
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64727
00:21:31.037 Removing: /var/run/dpdk/spdk_pid64751
00:21:31.037 Removing: /var/run/dpdk/spdk_pid65071
00:21:31.037 Removing: /var/run/dpdk/spdk_pid65109
00:21:31.037 Removing: /var/run/dpdk/spdk_pid65400
00:21:31.037 Removing: /var/run/dpdk/spdk_pid65810
00:21:31.037 Removing: /var/run/dpdk/spdk_pid66081
00:21:31.037 Removing: /var/run/dpdk/spdk_pid66865
00:21:31.037 Removing: /var/run/dpdk/spdk_pid67675
00:21:31.037 Removing: /var/run/dpdk/spdk_pid67797
00:21:31.037 Removing: /var/run/dpdk/spdk_pid67859
00:21:31.037 Removing: /var/run/dpdk/spdk_pid69114
00:21:31.037 Removing: /var/run/dpdk/spdk_pid69326
00:21:31.037 Removing: /var/run/dpdk/spdk_pid72687
00:21:31.037 Removing: /var/run/dpdk/spdk_pid72986
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73096
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73230
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73263
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73285
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73313
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73405
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73545
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73696
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73777
00:21:31.037 Removing: /var/run/dpdk/spdk_pid73969
00:21:31.037 Removing: /var/run/dpdk/spdk_pid74049
00:21:31.037 Removing: /var/run/dpdk/spdk_pid74147
00:21:31.037 Removing: /var/run/dpdk/spdk_pid74453
00:21:31.037 Removing: /var/run/dpdk/spdk_pid74832
00:21:31.037 Removing: /var/run/dpdk/spdk_pid74834
00:21:31.037 Removing: /var/run/dpdk/spdk_pid75110
00:21:31.037 Removing: /var/run/dpdk/spdk_pid75129
00:21:31.037 Removing: /var/run/dpdk/spdk_pid75143
00:21:31.037 Removing: /var/run/dpdk/spdk_pid75178
00:21:31.037 Removing: /var/run/dpdk/spdk_pid75184
00:21:31.037 Removing: /var/run/dpdk/spdk_pid75484
00:21:31.037 Removing: /var/run/dpdk/spdk_pid75528
00:21:31.037 Removing: /var/run/dpdk/spdk_pid75802
00:21:31.037 Removing: /var/run/dpdk/spdk_pid76003
00:21:31.037 Removing: /var/run/dpdk/spdk_pid76384
00:21:31.037 Removing: /var/run/dpdk/spdk_pid76891
00:21:31.037 Removing: /var/run/dpdk/spdk_pid77709
00:21:31.037 Removing: /var/run/dpdk/spdk_pid78288
00:21:31.037 Removing: /var/run/dpdk/spdk_pid78294
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80191
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80252
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80312
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80372
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80494
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80550
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80609
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80672
00:21:31.037 Removing: /var/run/dpdk/spdk_pid80988
00:21:31.037 Removing: /var/run/dpdk/spdk_pid82131
00:21:31.037 Removing: /var/run/dpdk/spdk_pid82281
00:21:31.037 Removing: /var/run/dpdk/spdk_pid82518
00:21:31.037 Removing: /var/run/dpdk/spdk_pid83064
00:21:31.037 Removing: /var/run/dpdk/spdk_pid83223
00:21:31.037 Removing: /var/run/dpdk/spdk_pid83380
00:21:31.037 Removing: /var/run/dpdk/spdk_pid83478
00:21:31.037 Removing: /var/run/dpdk/spdk_pid83642
00:21:31.037 Removing: /var/run/dpdk/spdk_pid83751
00:21:31.037 Removing: /var/run/dpdk/spdk_pid84406
00:21:31.037 Removing: /var/run/dpdk/spdk_pid84436
00:21:31.037 Removing: /var/run/dpdk/spdk_pid84471
00:21:31.037 Removing: /var/run/dpdk/spdk_pid84724
00:21:31.037 Removing: /var/run/dpdk/spdk_pid84755
00:21:31.037 Removing: /var/run/dpdk/spdk_pid84795
00:21:31.038 Removing: /var/run/dpdk/spdk_pid85217
00:21:31.038 Removing: /var/run/dpdk/spdk_pid85234
00:21:31.038 Removing: /var/run/dpdk/spdk_pid85490
00:21:31.038 Removing: /var/run/dpdk/spdk_pid85603
00:21:31.038 Removing: /var/run/dpdk/spdk_pid85621
00:21:31.038 Clean
00:21:31.295 11:45:34 -- common/autotest_common.sh@1451 -- # return 0
00:21:31.295 11:45:34 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:21:31.295 11:45:34 -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:31.295 11:45:34 -- common/autotest_common.sh@10 -- # set +x
00:21:31.295 11:45:34 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:21:31.295 11:45:34 -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:31.295 11:45:34 -- common/autotest_common.sh@10 -- # set +x
00:21:31.295 11:45:34 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:31.295 11:45:34 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:21:31.295 11:45:34 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:21:31.295 11:45:34 -- spdk/autotest.sh@391 -- # hash lcov
00:21:31.295 11:45:34 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:21:31.295 11:45:34 -- spdk/autotest.sh@393 -- # hostname
00:21:31.295 11:45:34 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:21:31.613 geninfo: WARNING: invalid characters removed from testname!
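The lcov capture above and the autotest.sh steps that follow implement a standard capture, merge, and filter flow: post-test counters are captured into cov_test.info, combined with the pre-test baseline tracefile, and then third-party and helper sources are stripped from the totals. A minimal standalone bash sketch of that flow, with illustrative paths, an assumed pre-existing cov_base.info, and an extra genhtml report step that this job does not run, might look like:

  #!/usr/bin/env bash
  # Sketch only: the paths, the baseline tracefile, and the genhtml step are assumptions,
  # not the actual autotest.sh code traced in this log.
  set -euo pipefail
  out=./coverage
  lcov -q --no-external -c -d ./spdk -t "$(hostname)" -o "$out/cov_test.info"       # capture post-test .gcda counters
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge baseline + test tracefiles
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' -o "$out/cov_total.info"     # drop bundled DPDK and system sources
  genhtml -q "$out/cov_total.info" -o "$out/html"                                   # optional HTML report, not part of this job

Passing the hostname as the lcov test name (-t) is also why geninfo prints the "invalid characters removed from testname" warning above: dashes in the image name are not valid testname characters.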
00:21:58.158 11:46:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:01.442 11:46:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:03.977 11:46:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:06.509 11:46:09 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:09.800 11:46:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:12.330 11:46:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:14.877 11:46:18 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:14.877 11:46:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:14.877 11:46:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:22:14.877 11:46:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:14.877 11:46:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:14.877 11:46:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:14.877 11:46:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:14.877 11:46:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:14.877 11:46:18 -- paths/export.sh@5 -- $ export PATH
00:22:14.877 11:46:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:14.877 11:46:18 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:22:14.877 11:46:18 -- common/autobuild_common.sh@444 -- $ date +%s
00:22:14.877 11:46:18 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720784778.XXXXXX
00:22:14.877 11:46:18 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720784778.5pxAxY
00:22:14.877 11:46:18 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:22:14.877 11:46:18 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:22:14.877 11:46:18 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:22:14.877 11:46:18 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:22:14.877 11:46:18 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:22:14.877 11:46:18 -- common/autobuild_common.sh@460 -- $ get_config_params
00:22:14.877 11:46:18 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:22:14.877 11:46:18 -- common/autotest_common.sh@10 -- $ set +x
00:22:14.877 11:46:18 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:22:14.877 11:46:18 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:22:14.877 11:46:18 -- pm/common@17 -- $ local monitor
00:22:14.877 11:46:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:14.877 11:46:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:14.877 11:46:18 -- pm/common@21 -- $ date +%s
00:22:14.877 11:46:18 -- pm/common@25 -- $ sleep 1
00:22:14.877 11:46:18 -- pm/common@21 -- $ date +%s
00:22:14.877 11:46:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720784778
00:22:14.877 11:46:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720784778
00:22:14.877 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720784778_collect-vmstat.pm.log
00:22:14.877 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720784778_collect-cpu-load.pm.log
00:22:15.834 11:46:19 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:22:15.834 11:46:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:22:15.834 11:46:19 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:22:15.834 11:46:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:22:15.834 11:46:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:22:15.834 11:46:19 -- spdk/autopackage.sh@19 -- $ timing_finish
00:22:15.834 11:46:19 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:15.834 11:46:19 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:22:15.834 11:46:19 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:15.834 11:46:19 -- spdk/autopackage.sh@20 -- $ exit 0
00:22:15.834 11:46:19 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:22:15.834 11:46:19 -- pm/common@29 -- $ signal_monitor_resources TERM
00:22:15.834 11:46:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:22:15.834 11:46:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:15.834 11:46:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:22:15.834 11:46:19 -- pm/common@44 -- $ pid=87350
00:22:15.834 11:46:19 -- pm/common@50 -- $ kill -TERM 87350
00:22:15.834 11:46:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:15.834 11:46:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:22:15.834 11:46:19 -- pm/common@44 -- $ pid=87352
00:22:15.834 11:46:19 -- pm/common@50 -- $ kill -TERM 87352
00:22:15.834 + [[ -n 5110 ]]
00:22:15.834 + sudo kill 5110
00:22:17.221 [Pipeline] }
00:22:17.239 [Pipeline] // timeout
00:22:17.246 [Pipeline] }
00:22:17.266 [Pipeline] // stage
00:22:17.272 [Pipeline] }
00:22:17.290 [Pipeline] // catchError
00:22:17.300 [Pipeline] stage
00:22:17.302 [Pipeline] { (Stop VM)
00:22:17.316 [Pipeline] sh
00:22:17.592 + vagrant halt
00:22:21.830 ==> default: Halting domain...
00:22:27.106 [Pipeline] sh
00:22:27.386 + vagrant destroy -f
00:22:31.570 ==> default: Removing domain...
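Before the VM is torn down, the stop_monitor_resources trace above shuts each resource monitor down by checking for its pidfile under the power/ output directory and sending SIGTERM to the recorded pid. A minimal bash sketch of that pidfile pattern, with an illustrative helper name and a hard-coded pidfile list rather than the real pm/common implementation, could look like:

  # Sketch only: function name, argument handling, and pidfile list are assumptions.
  stop_monitors() {
      local power_dir=$1 signal=${2:-TERM} pidfile pid
      for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
          [[ -e $pidfile ]] || continue               # monitor never started or already stopped
          pid=$(<"$pidfile")                          # pidfile holds only the monitor's pid
          kill -"$signal" "$pid" 2>/dev/null || true  # best effort; the process may already be gone
      done
  }
  # e.g. stop_monitors /home/vagrant/spdk_repo/output/power TERM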
00:22:31.582 [Pipeline] sh
00:22:31.903 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:22:31.913 [Pipeline] }
00:22:31.932 [Pipeline] // stage
00:22:31.939 [Pipeline] }
00:22:31.958 [Pipeline] // dir
00:22:31.964 [Pipeline] }
00:22:31.983 [Pipeline] // wrap
00:22:31.991 [Pipeline] }
00:22:32.007 [Pipeline] // catchError
00:22:32.015 [Pipeline] stage
00:22:32.017 [Pipeline] { (Epilogue)
00:22:32.032 [Pipeline] sh
00:22:32.313 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:38.889 [Pipeline] catchError
00:22:38.891 [Pipeline] {
00:22:38.906 [Pipeline] sh
00:22:39.188 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:39.188 Artifacts sizes are good
00:22:39.198 [Pipeline] }
00:22:39.217 [Pipeline] // catchError
00:22:39.230 [Pipeline] archiveArtifacts
00:22:39.238 Archiving artifacts
00:22:39.374 [Pipeline] cleanWs
00:22:39.388 [WS-CLEANUP] Deleting project workspace...
00:22:39.388 [WS-CLEANUP] Deferred wipeout is used...
00:22:39.395 [WS-CLEANUP] done
00:22:39.398 [Pipeline] }
00:22:39.417 [Pipeline] // stage
00:22:39.424 [Pipeline] }
00:22:39.443 [Pipeline] // node
00:22:39.451 [Pipeline] End of Pipeline
00:22:39.476 Finished: SUCCESS